845 results for Network scale-up method


Relevance: 100.00%

Abstract:

Co-combustion performance trials of Meat and Bone Meal (MBM) and peat were conducted using a bubbling fluidized bed (BFB) reactor. The trials examined the effects of co-combusting MBM and peat on flue gas emissions, bed fluidization, ash agglomeration tendency in the bed, and the composition and quality of the ash. MBM was mixed with peat at six levels between 15% and 100%. Emissions were predominantly below regulatory limits. CO concentrations in the flue gas exceeded the 100 mg/m3 limit only during combustion of pure MBM. SO2 emissions exceeded the limit of 50 mg/m3, while NOx emissions were below the limit of 300 mg/m3 in all trials. The HCl content of the flue gases varied around the limit of 30 mg/m3, whereas VOC emissions remained within their limits. Bed agglomeration was avoided when the bed temperature was about 850 °C and only 20% MBM was co-combusted. This study indicates that a pilot-scale BFB reactor can, under optimum conditions, be operated within emission limits when MBM is used as a co-fuel with peat, providing a basis for further scale-up development work in industrial-scale BFB applications.
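
As a minimal illustration of the kind of limit check described above, the sketch below compares hypothetical flue gas readings against the emission limits quoted in this abstract; the measured values in the example are placeholders, not results from these trials.

```python
# Illustrative sketch only: checks hypothetical flue gas measurements against the
# emission limits quoted in the abstract (mg/m3). The example readings are
# placeholders, not data from the study.
LIMITS_MG_M3 = {"CO": 100, "SO2": 50, "NOx": 300, "HCl": 30}

def check_compliance(measured: dict) -> dict:
    """Return True/False per species; True means the measurement is within its limit."""
    return {species: value <= LIMITS_MG_M3[species]
            for species, value in measured.items() if species in LIMITS_MG_M3}

# Hypothetical example readings (placeholders):
print(check_compliance({"CO": 85.0, "SO2": 62.0, "NOx": 240.0, "HCl": 28.0}))
# -> {'CO': True, 'SO2': False, 'NOx': True, 'HCl': True}
```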

Relevance: 100.00%

Abstract:

In a world where data is captured on a large scale, the major challenge for data mining algorithms is to scale up to large datasets. There are two main approaches to inducing classification rules: the divide-and-conquer approach, also known as top-down induction of decision trees, and the separate-and-conquer approach. A considerable amount of work has been done on scaling up the divide-and-conquer approach; however, very little work has been conducted on scaling up the separate-and-conquer approach. In this work we describe a parallel framework that allows the parallelisation of a certain family of separate-and-conquer algorithms, the Prism family. Parallelisation helps the Prism family of algorithms to harness additional computing resources in a network of computers in order to make the induction of classification rules scale better on large datasets. Our framework also incorporates a pre-pruning facility for parallel Prism algorithms.
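
For readers unfamiliar with the covering strategy, the sketch below shows a simplified, single-process version of the separate-and-conquer loop that Prism-style algorithms follow; it is not the authors' parallel framework, and the data representation (a list of dicts with a 'class' key) and the greedy precision-based condition selection are illustrative assumptions.

```python
# Simplified, single-process sketch of the separate-and-conquer (covering) loop
# used by Prism-style rule inducers. Instances are dicts that include a 'class'
# key; a rule is a list of (attribute, value) conditions. Illustrative only.

def precision(instances, target_class, attribute, value):
    """Fraction of instances matching (attribute == value) that belong to target_class."""
    matching = [x for x in instances if x[attribute] == value]
    return (sum(x["class"] == target_class for x in matching) / len(matching)) if matching else 0.0

def induce_rules_for_class(instances, target_class, attributes):
    """Learn rules until no uncovered instance of target_class remains."""
    rules, data = [], list(instances)
    while any(x["class"] == target_class for x in data):
        covered, rule, unused = list(data), [], list(attributes)
        # Conquer: specialise the rule until it covers only the target class.
        while unused and any(x["class"] != target_class for x in covered):
            attr, val = max(
                ((a, v) for a in unused for v in {x[a] for x in covered}),
                key=lambda av: precision(covered, target_class, av[0], av[1]),
            )
            rule.append((attr, val))
            unused.remove(attr)
            covered = [x for x in covered if x[attr] == val]
        rules.append(rule)
        # Separate: drop the instances the new rule covers and repeat on the rest.
        data = [x for x in data if any(x[a] != v for a, v in rule)] if rule else []
    return rules
```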

Relevance: 100.00%

Abstract:

The rapid increase in the size and number of databases demands data mining approaches that scale to large amounts of data. This has led to the exploration of parallel computing technologies in order to perform data mining tasks concurrently using several processors. Parallelization seems to be a natural and cost-effective way to scale up data mining technologies. One of the most important of these data mining technologies is the classification of newly recorded data. This paper surveys advances in parallelization in the field of classification rule induction.

Relevance: 100.00%

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; and finance experts are interested in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes poses a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
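
A minimal sketch of the data-parallel pattern mentioned above is given below, assuming a shared-nothing partitioning of the dataset across worker processes; the local "mining" step is deliberately reduced to label counting, and the function names are illustrative, not part of any particular framework discussed in the chapter.

```python
# Minimal sketch of data-parallel mining: partition the dataset across worker
# processes, mine each partition locally, then merge the partial results.
# The local mining step is deliberately trivial (label frequency counts).
from collections import Counter
from multiprocessing import Pool

def mine_partition(partition):
    """Local mining step on one partition (placeholder: label frequency counts)."""
    return Counter(label for _features, label in partition)

def parallel_mine(dataset, n_workers=4):
    # Split the dataset into roughly equal partitions, one per worker.
    partitions = [dataset[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partial_results = pool.map(mine_partition, partitions)
    # Merge step: combine the per-partition statistics into a global model.
    merged = Counter()
    for counts in partial_results:
        merged.update(counts)
    return merged

if __name__ == "__main__":
    data = [((0.1, 0.2), "a"), ((0.3, 0.4), "b"), ((0.5, 0.6), "a")]
    print(parallel_mine(data, n_workers=2))   # Counter({'a': 2, 'b': 1})
```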

Relevance: 100.00%

Abstract:

The scale-up of Spark Plasma Sintering (SPS) for the consolidation of large square monoliths (50 × 50 × 3 mm3) of thermoelectric material is demonstrated, and the properties of the fabricated samples are compared with those from laboratory-scale SPS. The SPS processing of n-type TiS2 and p-type Cu10.4Ni1.6Sb4S13 produces highly dense compacts of phase-pure material. Electrical and thermal transport property measurements reveal that the thermoelectric performance of the consolidated n- and p-type materials is comparable with that of material processed using laboratory-scale SPS, with ZT values that approach 0.8 and 0.35 at 700 K for Cu10.4Ni1.6Sb4S13 and TiS2, respectively. Mechanical property measurements show that large-scale SPS processing produces highly homogeneous materials, with hardness and elastic moduli that deviate little from values obtained on materials processed at the laboratory scale.
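
For reference, the ZT values quoted above refer to the standard dimensionless thermoelectric figure of merit, conventionally defined from the Seebeck coefficient S, electrical conductivity σ, absolute temperature T and total thermal conductivity κ (a textbook definition, not an equation taken from this study):

```latex
ZT = \frac{S^{2}\,\sigma\,T}{\kappa}
```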

Relevance: 100.00%

Abstract:

As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state, and systematically compared the behavior of the models under the WTG and DGW methods. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy than those produced by the WTG simulations. These large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
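
For orientation, the two large-scale parameterizations are commonly written as follows in the WTG/DGW literature (standard free-tropospheric forms, not equations reproduced from this study); τ is the WTG temperature-relaxation time scale, ε a mechanical damping rate, k the horizontal wavenumber, and the subscript "ref" denotes the radiative-convective equilibrium reference state:

```latex
\omega_{\mathrm{WTG}}\,\frac{\partial \theta}{\partial p}
  = \frac{\theta - \theta_{\mathrm{ref}}}{\tau}
\quad\text{(WTG)},
\qquad
\frac{\partial}{\partial p}\!\left(\epsilon\,\frac{\partial \omega_{\mathrm{DGW}}}{\partial p}\right)
  = \frac{k^{2} R_{d}}{p}\,\bigl(T_{v} - T_{v,\mathrm{ref}}\bigr)
\quad\text{(DGW)}
```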

Relevance: 100.00%

Abstract:

Land cover data derived from satellites are commonly used to prescribe inputs to models of the land surface. Since such data inevitably contain errors, quantifying how uncertainties in the data affect a model's output is important. To do so, a spatial distribution of possible land cover values is required to propagate through the model's simulation. However, at large scales, such as those required for climate models, such spatial modelling can be difficult. Also, computer models often require land cover proportions at sites larger than the original map scale as inputs, and it is the uncertainty in these proportions that this article discusses. This paper describes a Monte Carlo sampling scheme that generates realisations of land cover proportions from the posterior distribution implied by a Bayesian analysis that combines spatial information in the land cover map and its associated confusion matrix. The technique is computationally simple and has been applied previously to the Land Cover Map 2000 for the region of England and Wales. This article demonstrates the ability of the technique to scale up to large (global) satellite-derived land cover maps and reports its application to the GlobCover 2009 data product. The results show that, in general, the GlobCover data possess only small biases, with the largest belonging to non-vegetated surfaces. Among vegetated surfaces, the most prominent area of uncertainty is Southern Africa, which represents a complex heterogeneous landscape. It is also clear from this study that greater resources need to be devoted to the construction of comprehensive confusion matrices.
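
The sketch below illustrates one plausible form of such a Monte Carlo scheme, under the assumption that the posterior probability of the true class given the mapped class is obtained by normalising the columns of the confusion matrix; it is an illustrative simplification, not the authors' exact algorithm, and the example confusion matrix and pixel labels are made up.

```python
# Illustrative sketch (not the authors' exact scheme): given the mapped class of
# each pixel in a grid cell and a confusion matrix whose entry C[true, mapped]
# counts reference pixels of class `true` that the map labelled `mapped`, draw
# Monte Carlo realisations of the true land cover proportions for the cell.
import numpy as np

def sample_proportions(mapped_classes, confusion, n_samples=1000, rng=None):
    """mapped_classes: 1-D int array of mapped class labels for the pixels in one cell.
    confusion: (n_classes, n_classes) array, rows = true class, columns = mapped class.
    Returns an (n_samples, n_classes) array of sampled true-class proportions."""
    rng = np.random.default_rng(rng)
    n_classes = confusion.shape[0]
    # p(true | mapped), here estimated by normalising each column of the confusion matrix.
    p_true_given_mapped = confusion / confusion.sum(axis=0, keepdims=True)
    samples = np.empty((n_samples, n_classes))
    for s in range(n_samples):
        true = np.array([rng.choice(n_classes, p=p_true_given_mapped[:, m])
                         for m in mapped_classes])
        samples[s] = np.bincount(true, minlength=n_classes) / true.size
    return samples

# Hypothetical 3-class example: 100 pixels, mapped mostly as class 0.
conf = np.array([[80, 5, 2], [15, 90, 8], [5, 5, 90]], dtype=float)
mapped = np.repeat([0, 1, 2], [70, 20, 10])
props = sample_proportions(mapped, conf, n_samples=500)
print(props.mean(axis=0), props.std(axis=0))   # posterior mean and spread per class
```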

Relevance: 100.00%

Abstract:

Agricultural and agro-industrial residues are often considered both an environmental and an economic problem. A paradigm shift is therefore needed, treating residues as biorefinery feedstocks. In this work, cherimoya (Annona cherimola Mill.) seeds, which are lipid-rich (ca. 30%) and have a significant lignocellulosic fraction, were used as an example of a residue without any current valorization. Firstly, the lipid fraction was obtained by solvent extraction. The extraction yield varied from 13% to 28%, depending on the extraction method and time and on solvent purity. This oil was converted into biodiesel by base-catalyzed transesterification, yielding 76 g FAME/100 g oil. The resulting biodiesel is likely to be suitable for incorporation into the commercial chain, according to the EN 14214 standard. The remaining lignocellulosic fraction was subjected to two alternative fractionation processes for the selective recovery of hemicellulose, targeting different products. Empirical mathematical models were developed for both processes with a view to future scale-up. Autohydrolysis rendered essentially oligosaccharides (10 g L-1) with properties indicating potential food/feed/pharmacological applications. The remaining solid was enzymatically saccharified, reaching a saccharification yield of 83%. The hydrolyzate obtained by dilute acid hydrolysis contained mostly monosaccharides, mainly xylose (26 g L-1), glucose (10 g L-1) and arabinose (3 g L-1), and had a low content of microbial growth inhibitors. This hydrolyzate proved suitable as a culture medium for exopolysaccharide production using bacteria or microbial consortia. The maximum conversion of monosaccharides into xanthan gum was 0.87 g/g, and the maximum kefiran productivity was 0.07 g (L h)-1. This work shows the technical feasibility of using cherimoya seeds, and similar materials, as potential feedstocks, opening new perspectives for upgrading them within the biorefinery framework.
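
As a rough illustration of how the reported yields chain together, the back-of-the-envelope calculation below assumes that the 13-28% extraction yield is expressed per unit seed mass; this interpretation, and the per-100 g basis, are assumptions for illustration rather than figures stated in the abstract.

```python
# Back-of-the-envelope yield chain, per 100 g of cherimoya seeds.
# Assumption (not stated explicitly above): the 13-28% extraction yield is g oil per g seed.
fame_per_g_oil = 0.76                      # 76 g FAME / 100 g oil (reported)
for extraction_yield in (0.13, 0.28):      # reported range of oil extraction yields
    oil = 100 * extraction_yield           # g oil per 100 g seeds
    fame = oil * fame_per_g_oil            # g biodiesel (FAME) per 100 g seeds
    print(f"extraction {extraction_yield:.0%}: ~{fame:.1f} g FAME per 100 g seeds")
# -> roughly 9.9 to 21.3 g FAME per 100 g of seeds
```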

Relevance: 100.00%

Abstract:

Gene therapy is based on the transfer of exogenous genetic material into cells or tissues in order to correct, supplement or silence a particular gene. To achieve this goal, efficient vehicles, viral or non-viral, must be developed. The aim of this work was to produce and evaluate a nanoemulsion system as a possible carrier for non-viral gene therapy, able to load a model plasmid (pIRES2-EGFP). The nanoemulsion was produced by the sonication method, after selection of a composition from a pseudo-ternary phase diagram built with 5% Captex 355®, 1.2% Tween 80®, 0.8% Span 80®, 0.16% stearylamine and water (to 100%). Measurements of droplet size, polydispersity index (PI), zeta potential, pH and conductivity were performed to characterize the system. Results showed droplets smaller than 200 nm (PI < 0.2) and zeta potential > 30 mV. The formulation pH was near 7.0 and the conductivity was that expected for oil-in-water systems (70 to 90 μS/cm). A scale-up study, the stability of the system and the best sterilization method were also evaluated. We found that the system can be scaled up by adjusting the sonication time to the volume produced, that filtration was the best sterilization process, and that the nanoemulsions were stable for 180 days at 4 ºC. Once the system was developed, the complexation efficiency of the plasmid (pDNA) was tested by an agarose gel electrophoresis retardation assay. The complexation efficiency increased when stearylamine was incorporated into the aqueous phase (from 46 to 115 ng/μL), requiring a contact period (nanoemulsion/pDNA) of at least 2 hours in an ice bath for complete lipoplex formation. The nanoemulsion showed low toxicity in MRC-5 cells at the usual transfection concentration, with 81.49% cell survival. It can therefore be concluded that a nanoemulsion loaded with a model plasmid was achieved. However, further studies concerning transfection efficiency should be performed to confirm the system as a non-viral gene carrier.

Relevance: 100.00%

Abstract:

OBJECTIVE: To assess pain intensity and level of functionality in patients undergoing cardiac surgery at three time points (preoperatively, on the 7th postoperative day and at hospital discharge) and to relate these measures to each other, and to relate functionality to sex, age group, first cardiac surgery or reoperation, use of cardiopulmonary bypass (CPB), type of surgery and physiotherapy follow-up. METHOD: We studied 41 patients who underwent elective cardiac surgery via mid-sternal thoracotomy (TME) at the HC of the Faculdade de Medicina de Botucatu/UNESP. Pain intensity was assessed with the VAS scale, and functionality with the physical domain of the FIM (Functional Independence Measure) scale. RESULTS: Pain intensity was highest on the 7th postoperative day compared with the preoperative and discharge assessments. There was no pain preoperatively; at discharge, the median intensity was 3 (moderate pain). The greatest functional losses occurred on the 7th postoperative day compared with the total preoperative and discharge scores. A significant correlation was found between pain and functionality, showing that the decrease in pain between the 7th postoperative day and discharge contributed to the improvement in functional levels. CONCLUSION: The preoperative assessments provided predictive targets to be achieved. The assessments performed on the 7th postoperative day and at discharge allowed patients to be classified according to their losses and gains, indicating those who required more care and training of their capacities.

Relevance: 100.00%

Abstract:

In this paper, the short-term transmission network expansion planning (STTNEP) problem is solved through a specialized genetic algorithm (SGA). A complete AC model of the transmission network is used, which permits the formulation of an integrated power system transmission network expansion planning problem (real and reactive power planning). The characteristics of the proposed SGA for solving the STTNEP problem are detailed, and an interior point method is employed to solve the nonlinear programming problems arising during the solution steps of the SGA. Results of tests carried out with two electrical energy systems show the capabilities of the SGA and also the viability of using the AC model to solve the STTNEP problem. © 2009 IEEE.
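
The sketch below shows a generic genetic-algorithm skeleton for a 0/1 line-addition encoding of an expansion plan; it is not the specialized SGA of the paper, and the fitness evaluation, which in the paper involves solving a nonlinear AC operation problem with an interior point method, is abstracted here into a user-supplied `evaluate_plan` callback (an assumed name for illustration).

```python
# Generic GA skeleton for transmission expansion planning, shown only to
# illustrate the overall search structure. It is NOT the paper's specialized SGA:
# evaluating a candidate plan there requires solving a nonlinear AC operation
# problem with an interior point method, abstracted here as `evaluate_plan`,
# a user-supplied callback returning total cost (lower is better).
import random

def genetic_expansion_planning(n_candidate_lines, evaluate_plan,
                               pop_size=40, generations=200,
                               crossover_rate=0.9, mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    # A candidate plan is a 0/1 vector: which candidate circuits to build.
    population = [[rng.randint(0, 1) for _ in range(n_candidate_lines)]
                  for _ in range(pop_size)]
    best, best_cost = None, float("inf")
    for _ in range(generations):
        costs = [evaluate_plan(ind) for ind in population]
        for ind, cost in zip(population, costs):
            if cost < best_cost:
                best, best_cost = ind[:], cost
        def select():                                    # binary tournament selection
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            return population[i] if costs[i] < costs[j] else population[j]
        next_population = []
        while len(next_population) < pop_size:
            p1, p2 = select()[:], select()[:]
            if rng.random() < crossover_rate:            # one-point crossover
                cut = rng.randrange(1, n_candidate_lines)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):                       # bit-flip mutation
                for k in range(n_candidate_lines):
                    if rng.random() < mutation_rate:
                        child[k] = 1 - child[k]
                next_population.append(child)
        population = next_population[:pop_size]
    return best, best_cost
```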

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Abstract:

Background: Great efforts have been made to increase the accessibility of HIV antiretroviral therapy (ART) in low- and middle-income countries. The threat of wide-scale emergence of drug resistance could severely hamper ART scale-up efforts. Population-based surveillance of transmitted HIV drug resistance ensures the use of appropriate first-line regimens to maximize the efficacy of ART programs where drug options are limited. However, traditional HIV genotyping is extremely expensive, creating a cost barrier to wide-scale and frequent HIV drug resistance surveillance. Methods/Results: We have developed a low-cost, laboratory-scale, next-generation sequencing-based genotyping method to monitor drug resistance. We designed primers specifically to amplify protease and reverse transcriptase from Brazilian HIV subtypes and developed a multiplexing scheme using multiplex identifier tags to minimize cost while providing more robust data than traditional genotyping techniques. Using this approach, we characterized drug resistance from plasma in 81 HIV-infected individuals collected in Sao Paulo, Brazil. We describe the complexities of analyzing next-generation sequencing data and present a simplified open-source workflow to analyze drug resistance data. From these data, we identified drug resistance mutations in 20% of treatment-naive individuals in our cohort, which is similar to the frequencies identified using traditional genotyping in Brazilian patient samples. Conclusion: The ultra-wide sequencing approach described here allows multiplexing of at least 48 patient samples per sequencing run, 4 times more than the current genotyping method. This method is also 4-fold more sensitive (5% minimum detection frequency vs. 20%) at a cost 3-5 times lower than the traditional Sanger-based genotyping method. Lastly, by using a benchtop next-generation sequencer (Roche/454 GS Junior), this approach can be more easily implemented in low-resource settings. These data provide proof of concept that next-generation HIV drug resistance genotyping is a feasible and low-cost alternative to current genotyping methods and may be particularly beneficial for in-country surveillance of transmitted drug resistance.
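
As an illustration of the multiplexing scheme described above, the sketch below demultiplexes pooled reads by matching a multiplex identifier (MID) tag at the start of each read; the tag sequences, sample names and read data are hypothetical placeholders, not the study's actual barcodes or workflow.

```python
# Hypothetical sketch of the demultiplexing step implied above: reads from a
# pooled sequencing run are assigned to patient samples by matching the MID tag
# at the start of each read. Tags and sample IDs below are made-up placeholders.
MID_TO_SAMPLE = {            # hypothetical MID tag -> patient sample ID
    "ACGAGTGCGT": "patient_01",
    "ACGCTCGACA": "patient_02",
}

def demultiplex(reads):
    """reads: iterable of (read_id, sequence) tuples.
    Returns {sample_id: [(read_id, trimmed_sequence), ...]}, dropping unmatched reads."""
    by_sample = {sample: [] for sample in MID_TO_SAMPLE.values()}
    for read_id, seq in reads:
        for tag, sample in MID_TO_SAMPLE.items():
            if seq.startswith(tag):
                by_sample[sample].append((read_id, seq[len(tag):]))  # trim the tag
                break
    return by_sample

# Example with two made-up reads:
reads = [("r1", "ACGAGTGCGTTTCAGGAA"), ("r2", "ACGCTCGACAGGTTCCAT")]
print({k: len(v) for k, v in demultiplex(reads).items()})  # {'patient_01': 1, 'patient_02': 1}
```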