905 results for high performance concrete.


Relevance: 80.00%

Abstract:

Integrated master's dissertation in Civil Engineering

Relevance: 80.00%

Abstract:

[Excerpt] The aim of this research was to evaluate the influence of temperature, time and mass/volume ratio on the release of sugars and polyphenols from pineapple waste using an autohydrolysis procedure. A Box-Behnken design with three factors (time, temperature and mass/volume ratio) at three levels was used. All treatments were performed in triplicate. Nine central points were used. For the autohydrolysis treatments, an oil bath was used [1]. After autohydrolysis, the liquid phases or hydrolysates were analyzed for glucose and fructose concentration by high performance liquid chromatography (HPLC) [2]. The Folin-Ciocalteu assay was used to measure the total polyphenols of the hydrolysates [3], and HPLC was used to identify these molecules [4]. (...)
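
As an illustration of the experimental design named in the excerpt, the following minimal sketch (not the authors' code) enumerates the coded runs of a three-factor, three-level Box-Behnken design; the factor names and the mapping of coded levels to real settings are assumptions made here for demonstration only.

```python
from itertools import combinations

# Factors from the excerpt; coded levels -1/0/+1 would map to the real
# low/mid/high settings, which are assumptions here for illustration.
factors = ["time", "temperature", "mass_volume_ratio"]

def box_behnken(n_factors: int, n_center: int):
    """Enumerate the coded runs of a Box-Behnken design."""
    runs = []
    # Edge midpoints: every pair of factors at +/-1, the rest at 0.
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * n_factors
                run[i], run[j] = a, b
                runs.append(run)
    # Center points: all factors at their mid level.
    runs.extend([0] * n_factors for _ in range(n_center))
    return runs

# 12 edge runs plus 9 center points, matching the counts in the excerpt.
for run in box_behnken(len(factors), n_center=9):
    print(dict(zip(factors, run)))
```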

Relevance: 80.00%

Abstract:

Aromatic amines are widely used industrial chemicals, and their major sources in the environment include several chemical industry sectors such as oil refining, synthetic polymers, dyes, adhesives, rubbers, perfumes, pharmaceuticals, pesticides and explosives. They also result from diesel exhaust, from the combustion of wood chips and rubber, and from tobacco smoke. Some aromatic amines are generated during cooking as well, especially of grilled meat and fish. The intensive use and production of these compounds explains their occurrence in the environment (air, water and soil), creating a potential for human exposure. Since aromatic amines are potentially carcinogenic and toxic agents, they constitute an important class of environmental pollutants of enormous concern, whose efficient removal is a crucial task for researchers, and several removal methods have been investigated and applied.

In this chapter the types and general properties of aromatic amine compounds are reviewed. As aromatic amines continuously enter the environment from various sources and have been designated as high-priority pollutants, their presence in the environment must be monitored at concentration levels below 30 mg L⁻¹, compatible with the limits allowed by the regulations. Consequently, the most relevant analytical methods for detecting aromatic amines in environmental matrices, and for monitoring their degradation, are essential and will be presented. These include spectroscopy, namely UV/visible and Fourier transform infrared spectroscopy (FTIR); chromatography, in particular thin-layer (TLC), high-performance liquid (HPLC) and gas chromatography (GC); capillary electrophoresis (CE); mass spectrometry (MS); and combinations of different methods, including GC-MS, HPLC-MS and CE-MS. Choosing the best method depends on availability, cost, detection limit and sample concentration; samples sometimes need to be concentrated or pretreated. However, combined methods may give more complete results based on their complementary information.

The environmental impact, toxicity and carcinogenicity of many aromatic amines have been reported and are also emphasized in this chapter. Finally, conventional aromatic amine degradation processes and the alternative biodegradation processes are highlighted. Parameters affecting biodegradation, the role of different electron acceptors in aerobic and anaerobic biodegradation, and kinetics are discussed. Conventional processes, including extraction, adsorption onto activated carbon, chemical oxidation, advanced oxidation, electrochemical techniques and irradiation, suffer from drawbacks such as high costs, formation of hazardous by-products and low efficiency. Biological processes, which take advantage of processes occurring naturally in the environment, have been developed and tested, and have proved to be an economical, energy-efficient and environmentally feasible alternative. Aerobic biodegradation is one of the most promising techniques for aromatic amine remediation, but it has the drawback that aromatic amines may autooxidize once exposed to oxygen instead of being degraded. Higher costs, especially due to power consumption for aeration, can also limit its application. Anaerobic degradation technology is a novel path for the treatment of a wide variety of aromatic amines, including in industrial wastewater, and will be discussed. However, some aromatic amines are difficult to degrade under anaerobic conditions and, thus, other electron acceptors such as nitrate, iron, sulphate, manganese and carbonate have, alternatively, been tested.
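
The kinetics mentioned above are often summarized with a first-order decay model fitted to biodegradation data. The sketch below illustrates that standard model only; the rate constant and initial concentration are invented values, not figures from the chapter.

```python
import math

# First-order biodegradation model: C(t) = C0 * exp(-k * t).
# C0 and k below are hypothetical values for illustration only.
C0 = 25.0   # initial aromatic amine concentration, mg/L
k = 0.12    # first-order rate constant, 1/day

def concentration(t_days: float) -> float:
    """Residual concentration after t_days of degradation."""
    return C0 * math.exp(-k * t_days)

# The half-life follows directly from the model: t_half = ln(2) / k.
half_life = math.log(2) / k
print(f"half-life: {half_life:.1f} days")
for t in (0, 5, 10, 20):
    print(f"day {t:2d}: {concentration(t):5.2f} mg/L")
```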

Relevance: 80.00%

Abstract:

Master's dissertation in Characterization Techniques and Chemical Analysis

Relevance: 80.00%

Abstract:

Master's dissertation in Characterization Techniques and Chemical Analysis

Relevance: 80.00%

Abstract:

Today's advances in computing power come from parallel processing, given the characteristics of the new hardware architectures. Using this hardware appropriately accelerates the algorithms (programs) being executed; however, properly converting an algorithm into its parallel form is complex, and that parallel form is in turn specific to each type of parallel hardware. The most common general-purpose processors today are multicore parallel processors, also called Symmetric Multi-Processors (SMP). It is now hard to find a desktop processor without some SMP-style parallelism, and the development trend is toward processors with ever more cores. Graphics Processor Units (GPU), in turn, have grown their computing power by incorporating multiple processing units into their electronics, to the point that it is not hard today to find GPU boards capable of 200 to 400 parallel processing threads. These processors are very fast and specific to the task for which they were developed, mainly video processing; however, since this kind of processing has much in common with scientific computing, these devices have been reoriented under the name General Processing Graphics Processor Unit (GPGPU). Unlike the SMP processors mentioned above, GPGPUs are not general purpose, and their general use is complicated by the limited amount of memory each board can hold and by the type of parallel processing required for their use to be productive. Field Programmable Gate Arrays (FPGA) are programmable logic devices capable of performing large numbers of operations in parallel, so they can be used to implement specific algorithms that exploit the parallelism they offer; their drawback derives from the complexity of programming and testing the algorithm instantiated on the device.

Given this diversity of parallel processors, the objective of our work is to analyse the specific characteristics of each of them and their impact on the structure of algorithms, so that using them yields processing performance commensurate with the number of resources employed, and to combine them so that they complement one another beneficially. Specifically, starting from the characteristics of the hardware, we determine the properties a parallel algorithm must have in order to be accelerated; the characteristics of the parallel algorithms will in turn determine which of these new types of hardware is most suitable for their instantiation. In particular, we take into account the level of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
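
The goal of matching algorithm structure to hardware resources can be framed with Amdahl's law, which bounds the speedup obtainable from N parallel units given the serial fraction of an algorithm. The sketch below is a generic illustration, not code from the thesis, and the serial fractions are hypothetical.

```python
# Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N), where s is the
# fraction of the algorithm that must run serially. The values of s
# below are hypothetical, chosen only to illustrate the trend.
def amdahl_speedup(serial_fraction: float, n_units: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

# Compare SMP-scale core counts with GPU-scale thread counts.
for s in (0.05, 0.20):
    for n in (4, 8, 200, 400):
        print(f"s={s:.2f}, N={n:3d}: speedup {amdahl_speedup(s, n):6.2f}x")
```

With s = 0.20, even 400 threads yield less than a 5x speedup, which is why the level of data dependence listed above dominates the choice of hardware.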

Relevance: 80.00%

Abstract:

An appropriate assessment of end-to-end network performance presumes highly efficient time tracking and measurement, with precise control over stopping and resuming program operation. In this paper, a novel approach to solving the problems of highly efficient and precise time measurement on PC platforms and on ARM architectures is proposed. A new unified High Performance Timer and a corresponding software library offer a unified interface to the known time counters and automatically identify the fastest and most reliable time source available in the user space of a computing system. The research focuses on developing an approach for unified time acquisition from the PC hardware, substituting the common way of obtaining time values through Linux system calls. The presented approach provides a much faster means of obtaining time values with nanosecond precision than conventional means. Moreover, it is capable of handling sequential time values, precise sleep functions and process resuming. This reduces the computing resources wasted during the execution of a sleeping process from 100% (busy-wait) to 1-1.5%, while maintaining the benefit of very accurate process resume times on long waits.
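
The reported drop from 100% busy-wait to roughly 1-1.5% CPU use can be illustrated with a common hybrid-sleep pattern: cede the CPU for most of the interval, then busy-wait only the final stretch. This is a minimal Python sketch of that general technique, not the paper's library, and the 200 microsecond spin threshold is an assumption.

```python
import time

def precise_sleep(duration_s: float, spin_threshold_s: float = 200e-6):
    """Sleep until duration_s has elapsed, busy-waiting only the tail.

    The coarse sleep cedes the CPU (near-zero usage); the short final
    spin on a high-resolution counter gives a precise wake-up time.
    """
    deadline = time.perf_counter() + duration_s
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= 0:
            return
        if remaining > spin_threshold_s:
            # Leave a margin so the OS scheduler cannot overshoot.
            time.sleep(remaining - spin_threshold_s)
        # else: busy-wait the last ~200 microseconds for precision.

t0 = time.perf_counter()
precise_sleep(0.050)  # request a 50 ms sleep
print(f"slept {time.perf_counter() - t0:.6f} s")
```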

Relevance: 80.00%

Abstract:

This work describes a test tool that allows performance testing of different end-to-end available-bandwidth estimation algorithms, along with their different implementations. The goal of such tests is to find the best-performing algorithm and implementation and to use it in the congestion control mechanism of high-performance reliable transport protocols. The main idea of this paper is to describe the options that provide an available-bandwidth estimation mechanism for high-speed data transport protocols and to develop the basic functionality of such a test tool, with which it will be possible to manage instances of the test application on all hosts involved in testing, aided by some middleware.
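
One classical building block of the estimation algorithms such a tool would exercise is the packet-pair method: two back-to-back probe packets of size L arrive separated by a dispersion t2 - t1, and the bottleneck bandwidth is estimated as L / (t2 - t1). The sketch below is a generic illustration of that idea, not the tool described in the paper; the packet size and timestamps are invented.

```python
# Packet-pair estimate: bandwidth ~= L / (t2 - t1), where L is the
# probe packet size and t2 - t1 the dispersion of a back-to-back pair
# at the receiver. Values below are hypothetical, for illustration.
PACKET_SIZE_BITS = 1500 * 8  # one MTU-sized probe packet

def packet_pair_estimate(t1_s: float, t2_s: float) -> float:
    """Return the estimated bottleneck bandwidth in bits per second."""
    dispersion = t2_s - t1_s
    if dispersion <= 0:
        raise ValueError("second packet must arrive after the first")
    return PACKET_SIZE_BITS / dispersion

# A pair separated by 120 microseconds implies about 100 Mbit/s.
print(f"{packet_pair_estimate(0.000000, 0.000120) / 1e6:.1f} Mbit/s")
```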

Relevance: 80.00%

Abstract:

This note describes ParallelKnoppix, a bootable CD that allows econometricians with average knowledge of computers to create and begin using a high performance computing cluster for parallel computing in very little time. The computers used may be heterogeneous machines, and clusters of up to 200 nodes are supported. When the cluster is shut down, all machines are in their original state, so their temporary use in the cluster does not interfere with their normal uses. An example shows how a Monte Carlo study of a bootstrap test procedure may be done in parallel. Using a cluster of 20 nodes, the example runs approximately 20 times faster than it does on a single computer.
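
The note's example works because Monte Carlo replications are independent and thus embarrassingly parallel. The same pattern can be sketched on a single multicore machine with Python's multiprocessing; this toy stands in for, and is not, the cluster setup ParallelKnoppix provides, and the statistic computed is a placeholder.

```python
import multiprocessing as mp
import random
import statistics

def one_replication(seed: int) -> float:
    """One Monte Carlo replication: a toy bootstrap-style statistic.

    A real study would resample data and compute a test statistic;
    here we just average a random sample, for illustration.
    """
    rng = random.Random(seed)
    sample = [rng.gauss(0.0, 1.0) for _ in range(1000)]
    return statistics.mean(sample)

if __name__ == "__main__":
    # Independent replications scale almost linearly with worker count,
    # which is why the note reports ~20x speedup on a 20-node cluster.
    with mp.Pool(processes=4) as pool:
        results = pool.map(one_replication, range(10_000))
    print(f"mean of statistics: {statistics.mean(results):+.4f}")
```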

Relevance: 80.00%

Abstract:

The devices used in new telecommunication systems must offer advanced performance, small size and low cost. Filters based on SAW technology are currently used to operate at microwave frequencies. Given the drawbacks of this type of filter, the aim is to replace them with filters based on BAW technology. The ladder topology is, so far, the most widespread for designing these filters.

Relevance: 80.00%

Abstract:

Over the last few decades, mainly owing to a change in eating habits, there has been a worldwide increase in chronic diseases (obesity, cardiovascular disease, etc.). Mediterranean countries show a lower incidence of these diseases, and this seems to be due to the so-called Mediterranean diet. The Mediterranean diet is characterized by a combination of olive oil as the main fat; abundant vegetables and fruit; legumes, nuts, cheeses and yoghurt, fish, bread, pasta, cereals and their derivatives; and a moderate consumption of wine and meat. This dietary model, rich in tocopherols, phytosterols and phytostanols that help reduce blood cholesterol, explains the lower incidence of cardiovascular disease in Mediterranean populations. These compounds inhibit the oxidative deterioration of oils and act as antipolymerization agents in frying oils, and they can reduce cholesterol levels, preventing cardiovascular disease. Phytosterols and phytostanols can occur in free form or esterified with fatty acids, phenolic acids and glucose. The objectives of this work were to develop fast, reliable and robust methods for the analysis of tocopherols, phytosterols and phytostanols, and to apply them to nuts, rice bran oil, grape seed oil and products containing them. The first method was based on liquid chromatography (HPLC-DAD) with solid-phase extraction (SPE) as an alternative to saponification for the determination of free phytosterols; it was applied to samples of chocolates containing phytosterols. The second method was based on gas chromatography (GC-FID) with saponification and SPE to quantify free, esterified and total phytosterols and phytostanols. The annexed documents describe the developed methods in depth.
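
Quantification in chromatographic methods like these typically rests on calibrating detector response against standards, usually a linear fit of peak area versus concentration that is then inverted for the unknown. The sketch below illustrates that generic step only, not the dissertation's procedure; all concentrations and peak areas are invented.

```python
# Linear calibration for chromatographic quantification: fit
# area = m * concentration + b on standards, then invert it for the
# unknown. Standard concentrations and areas below are invented.
import numpy as np

std_conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])        # mg/L standards
std_area = np.array([102.0, 198.0, 497.0, 1005.0, 1980.0])  # peak areas

m, b = np.polyfit(std_conc, std_area, deg=1)  # least-squares line

def quantify(peak_area: float) -> float:
    """Concentration (mg/L) of an unknown from its peak area."""
    return (peak_area - b) / m

print(f"slope {m:.2f}, intercept {b:.2f}")
print(f"unknown with area 750 -> {quantify(750.0):.1f} mg/L")
```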

Relevance: 80.00%

Abstract:

As research increasingly depends on computers, data storage is becoming a scarce resource for projects and accounts for a large share of their total cost. Some projects try to solve this problem using distributed storage, so some centres must provide large amounts of low-cost mass storage based on magnetic tape. The drawback of this solution is that performance drops, particularly when dealing with large numbers of small files. Our goal is to create a hybrid between a high-cost, high-performance disk-based system and a low-cost, low-performance tape-based one. To do so, we join dCache, a distributed storage system, with Castor, a hierarchical storage system, creating virtual file systems that hold large numbers of small files in order to improve the overall performance of the system.
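
The core trick, aggregating many small files into one large object before it goes to tape, can be shown with a short sketch. Packing into a tar archive here is a generic stand-in for the virtual file systems the project builds, and the paths are hypothetical.

```python
import tarfile
from pathlib import Path

def pack_small_files(src_dir: str, archive_path: str) -> int:
    """Bundle every file under src_dir into one tar archive.

    Tape drives stream large objects efficiently; thousands of small
    files cause per-file seek and mount overhead, so we write (and
    later recall) one big archive instead.
    """
    count = 0
    with tarfile.open(archive_path, mode="w") as tar:
        for path in sorted(Path(src_dir).rglob("*")):
            if path.is_file():
                tar.add(path, arcname=path.relative_to(src_dir))
                count += 1
    return count

# Hypothetical paths, for illustration only.
n = pack_small_files("/data/small_files", "/tape_staging/bundle.tar")
print(f"packed {n} files")
```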

Relevance: 80.00%

Abstract:

The growing use of mobile communication systems has driven the demand for miniaturized, high-performance band-pass filters operating in the microwave frequency range. Film Bulk Acoustic Resonators (FBAR) are becoming the main alternative to filters based on Surface Acoustic Wave (SAW) resonators or on ceramic resonators. Stacked Crystal Filters (SCF) and Coupled Resonator Filters (CRF) are FBAR configurations that achieve excellent stop-band attenuation. This work presents a novel electrical equivalent circuit that models the CRF. A filter synthesis methodology is then developed for the SCF and the CRF using their electrical equivalent circuits. The presented design methodology makes it possible to obtain the dimensions of the acoustic filter structure from the filter specifications and the constraints of the technology. Several Chebyshev responses for real communication systems have been implemented to validate the filter design procedure, obtaining the expected results.
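
For context on the equivalent-circuit approach, a single acoustic resonator is commonly described by the standard Butterworth-Van Dyke model, whose motional elements set the series and parallel resonances; the formulas below state that textbook model, not the novel CRF circuit introduced in this work.

```latex
% Butterworth-Van Dyke resonator model: a motional branch L_m, C_m, R_m
% in parallel with the static plate capacitance C_0.
f_s = \frac{1}{2\pi\sqrt{L_m C_m}}, \qquad
f_p = f_s \sqrt{1 + \frac{C_m}{C_0}}
```

The spacing between f_s and f_p, set by the ratio C_m / C_0, bounds the achievable filter bandwidth, which is why synthesis methodologies like the one above start from the equivalent circuit.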

Relevance: 80.00%

Abstract:

In recent years, the demand for small, lightweight, high-performance radio-frequency band-pass filters for wireless communication systems has increased significantly. These systems are mainly third-generation UMTS mobile telephony and the GPS navigation system. Current filters, based on SAW (Surface Acoustic Wave) resonators, are small but are limited in frequency (3 GHz), and their technology is not compatible with standard integrated-circuit technologies. For these reasons, filters based on BAW (Bulk Acoustic Wave) resonators are expected to replace SAW filters. Both have similar dimensions, but BAW filters can operate at frequencies above 3 GHz and can handle higher power levels, and, importantly, their technology is compatible with standard integrated-circuit technologies. Research on BAW filters has focused on improving fabrication processes and material quality, but little work has been done on adapting systematic filter design techniques to the particularities of this technology; the main objective of this work is therefore to present systematic methods for the design of BAW filters, focusing on the study of stacked structures.

Relevance: 80.00%

Abstract:

The photosensitizing properties of m-tetrahydroxyphenylchlorin (mTHPC) and polyethylene glycol-derivatized mTHPC (pegylated mTHPC) were compared in nude mice bearing human malignant mesothelioma, squamous cell carcinoma and adenocarcinoma xenografts. Laser light (20 J/cm2) at 652 nm was delivered to the tumour (surface irradiance) and to an equal-sized area of the hind leg of the animals after i.p. administration of 0.1 mg/kg body weight mTHPC and an equimolar dose of pegylated mTHPC, respectively. The extent of tumour necrosis and normal tissue injury was assessed by histology. Both mTHPC and pegylated mTHPC catalyse photosensitized necrosis in mesothelioma xenografts at drug-light intervals of 1-4 days. The onset of action of pegylated mTHPC seemed slower, but its effect significantly exceeded that of mTHPC by days 3 and 4, with the greatest difference noted at day 4. Pegylated mTHPC also induced significantly larger photonecrosis than mTHPC in squamous cell xenografts at day 4, but not in adenocarcinoma, where mTHPC showed the greatest activity. The degree of necrosis induced by pegylated mTHPC was the same for all three xenografts. mTHPC led to necrosis of skin and underlying muscle at a drug-light interval of 1 day, but only minor histological changes at drug-light intervals of 2-4 days. In contrast, pegylated mTHPC did not result in histologically detectable changes in normal tissues under the same treatment conditions at any drug-light interval assessed. In this study, pegylated mTHPC had advantages as a photosensitizer compared to mTHPC. Tissue concentrations of mTHPC and pegylated mTHPC were measured by high-performance liquid chromatography in non-irradiated animals 4 days after administration. There was no significant difference in tumour uptake between the two sensitizers in mesothelioma, adenocarcinoma and squamous cell carcinoma xenografts. Tissue concentration measurements were of limited use for predicting photosensitization in this model.