16 results for Adiabatic Compression

at Universidad Politécnica de Madrid


Relevance: 20.00%

Abstract:

A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase, with an evolutionary algorithm responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core aimed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system on the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can be easily extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
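As a software-level illustration of the search loop described above, the following sketch runs a minimal (mu + lambda) Evolution Strategy with Gaussian mutation as its only operator. The two-coefficient predictor and its squared-error fitness are illustrative assumptions standing in for the wavelet core and its quality metric; they are not the paper's hardware implementation.

```python
import random

def fitness(coeffs, signal):
    # Toy stand-in for the compression-quality metric: how well a linear
    # predictor built from `coeffs` predicts each sample from its left
    # neighbour (lower residual energy = better "compression").
    err = 0.0
    for i in range(1, len(signal)):
        pred = coeffs[0] * signal[i - 1] + coeffs[1]
        err += (signal[i] - pred) ** 2
    return err

def evolve(signal, mu=4, lam=12, gens=60, sigma=0.3, seed=1):
    """(mu + lambda) Evolution Strategy with Gaussian mutation only,
    mirroring the reduced, hardware-friendly operator set described above."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(mu)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            parent = rng.choice(pop)
            offspring.append([c + rng.gauss(0, sigma) for c in parent])
        # Elitist survivor selection: keep the mu best of parents + offspring.
        pop = sorted(pop + offspring, key=lambda c: fitness(c, signal))[:mu]
    return pop[0]

# A smooth ramp: the ideal predictor is s[i] = 1.0 * s[i-1] + 1.0,
# so the ES should drive the coefficients towards [1.0, 1.0].
signal = [float(i) for i in range(32)]
best = evolve(signal)
```

With elitist selection the best fitness is monotone non-increasing, which is the property the calibration phase relies on.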

Relevance: 20.00%

Abstract:

The mechanical response under compression of LiF single-crystal micropillars oriented in the [111] direction was studied. Micropillars of different diameters (in the range 1–5 μm) were obtained by etching the matrix in directionally solidified NaCl–LiF and KCl–LiF eutectic compounds. Selected micropillars were exposed to high-energy Ga+ ions to ascertain the effect of ion irradiation on the mechanical response. Ion irradiation led to an increase of approximately 30% in the yield strength and the maximum compressive strength, but no effect of the micropillar diameter on the flow stress was found in either the as-grown or the ion-irradiated pillars. The dominant deformation micromechanisms were analyzed by means of crystal plasticity finite element simulations of the compression test, which explained the strong effect of micropillar misorientation on the mechanical response. Finally, the lack of a size effect on the flow stress is discussed in the light of previous studies on LiF and other materials that show a high lattice resistance to dislocation motion.

Relevance: 20.00%

Abstract:

The effect of crystal misorientation, geometrical tilt, and contact misalignment on the compression of highly anisotropic single-crystal micropillars was assessed by means of crystal plasticity finite element simulations. The investigation was focused on single crystals with the NaCl structure, such as MgO or LiF, which present a marked plastic anisotropy as a result of the large difference in critical resolved shear stress between the "soft" {110}〈110〉 and the "hard" {100}〈110〉 active slip systems. It was found that contact misalignment led to a large reduction in the initial stiffness of the micropillar in crystals oriented along both the soft and the hard directions. The crystallographic tilt, however, did not modify the initial crystal stiffness. From the viewpoint of the plastic response, none of the effects analyzed led to significant differences in the flow stress when the single crystals were oriented along the "soft" [100] direction. Large differences were found, however, when the single crystal was oriented in the "hard" [111] direction, as a result of the activation of the soft slip systems. The numerical simulations were in very good agreement with experimental data from the literature.
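The soft/hard orientation contrast above can be made concrete with Schmid factors, m = cos(phi) * cos(lambda): the {110}〈110〉 systems carry the full resolved shear under [100] loading but none at all under [111], which is what forces activation of the hard {100}〈110〉 systems. A minimal sketch; the two representative slip systems chosen are the standard ones for the NaCl structure, not values taken from the paper.

```python
import math

def schmid_factor(load, plane_normal, slip_dir):
    """|cos(phi) * cos(lambda)| for a uniaxial loading axis."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def norm(a):
        return math.sqrt(dot(a, a))
    cos_phi = dot(load, plane_normal) / (norm(load) * norm(plane_normal))
    cos_lam = dot(load, slip_dir) / (norm(load) * norm(slip_dir))
    return abs(cos_phi * cos_lam)

# Representative "soft" {110}<1-10> and "hard" {100}<011> systems.
soft = ((1, 1, 0), (-1, 1, 0))
hard = ((1, 0, 0), (0, 1, 1))

m_soft_100 = schmid_factor((1, 0, 0), *soft)  # soft system fully loaded under [100]
m_soft_111 = schmid_factor((1, 1, 1), *soft)  # soft system inactive under [111]
m_hard_111 = schmid_factor((1, 1, 1), *hard)  # [111] must activate the hard system
```

The zero Schmid factor of every soft system under ideal [111] loading is also why small misorientations matter so much in that orientation: any tilt restores a small resolved shear on the much weaker soft systems.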

Relevance: 20.00%

Abstract:

Results of impact and compression tests on the Chojuro, Twentieth Century, Tsu Li, and Ya Li varieties of Asian pears indicate that Chojuro pears are the firmest and the most resistant to mechanical damage. At the time of harvest, Tsu Li and Ya Li pears could resist mechanical damage nearly as well as Chojuro pears, but they became more susceptible to bruising in cold storage. Twentieth Century pears were the most sensitive to impact and compression bruising. Increased time in the ripening room produced more softening and a greater increase in bruise resistance in Chojuro and Twentieth Century pears than in Tsu Li and Ya Li pears.

Relevance: 20.00%

Abstract:

Apple fruits, cv. Granny Smith, were subjected to mechanical impact and compression loads using a steel rod with a spherical tip (19 mm diameter, 50.6 g mass). The applied energies were low enough to produce an enzymatic reaction: 0.0120 J for impact and 0.0199 J for compression. The bruised material was cut and examined with a transmission electron microscope. In both compression and impact, bruises showed a central region located in the flesh parenchyma, at a distance that approximately equalled the indenter tip radius. The parenchyma cells of this region were more altered than cells from the epidermis and hypodermis. Tissues under compression presented numerous deformed parenchyma cells with broken tonoplasts and tissue degradation, as predicted by several investigators. The impacted cells supported different kinds of stresses than the compressed cells, resulting in intensive vesiculation, either in the vacuole or in the middle-lamella region between the cell walls of adjacent cells. A large proportion of parenchyma cells had completely split, or had initiated splitting, at the middle lamella. Bruising may therefore develop with or without cell rupture: cell wall rupture is not essential for the development of a bruise, at least the smallest one, as predicted previously.

Relevance: 20.00%

Abstract:

A novel compression scheme is proposed, in which hollow targets with specifically curved structures, initially filled with uniform matter, are driven by converging shock waves. The self-similar dynamics is analyzed for converging and diverging shock waves. The shock-compressed densities and pressures are much higher than those achieved using spherical shocks, due to the geometric accumulation. The dynamic behavior is demonstrated using two-dimensional hydrodynamic simulations. A linear stability analysis for the spherical geometry reveals a new dispersion relation with cut-off mode numbers as a function of the specific-heat ratio, above which eigenmode perturbations are smeared out in the converging phase.

Relevance: 20.00%

Abstract:

Various researchers have developed models of conventional H2O–LiBr absorption machines with the aim of predicting their performance. In this paper, the methodology of characteristic equations developed by Hellmann et al. (1998) is applied. This model is able to represent the capacity of single-effect absorption chillers and heat pumps by means of simple algebraic equations. An extended characteristic equation, based on a characteristic temperature difference, has been obtained taking the features of the facility into account. As a result, it is concluded that for adiabatic absorbers a subcooling temperature must be specified. The effect of evaporator overflow has also been characterized, and its influence on cooling capacity has been included in the extended characteristic equation. Taking the particular design and operating features into account, good agreement between the experimental performance data and those obtained through the extended characteristic equation has been achieved at off-design operation, which allows its use for simulation and control purposes.
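The characteristic-equation method referenced above reduces capacity prediction to an algebraic expression in the four external circuit temperatures. A minimal sketch of that idea follows; the Dühring slope B of about 1.15 is a commonly quoted value for H2O–LiBr, and the fitted slope and intercept used here are purely illustrative, not the facility's values from the paper.

```python
def char_temp_diff(t_g, t_a, t_c, t_e, B=1.15):
    """Characteristic temperature difference: the driving potential of a
    single-effect H2O-LiBr machine, with B the Duhring-line slope."""
    return t_g - t_a - B * (t_c - t_e)

def cooling_capacity(t_g, t_a, t_c, t_e, s=0.4, r=-2.0, B=1.15):
    """Characteristic equation Q_e = s * DDt + r (kW). The slope s and
    intercept r must be fitted to the particular facility, as in the
    paper's extended equation; these values are illustrative only."""
    return s * char_temp_diff(t_g, t_a, t_c, t_e, B) + r

# External circuit temperatures (degC): generator, absorber, condenser, evaporator.
q = cooling_capacity(t_g=90.0, t_a=35.0, t_c=40.0, t_e=10.0)
```

The appeal of the method for control purposes is exactly this shape: once s and r are fitted, off-design capacity follows from four measurable temperatures.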

Relevance: 20.00%

Abstract:

The effect of the temperature on the compressive stress–strain behavior of Al/SiC nanoscale multilayers was studied by means of micropillar compression tests at 23 °C and 100 °C. The multilayers (composed of alternating layers of 60 nm in thickness of nanocrystalline Al and amorphous SiC) showed a very large hardening rate at 23 °C, which led to a flow stress of 3.1 ± 0.2 GPa at 8% strain. However, the flow stress (and the hardening rate) was reduced by 50% at 100 °C. Plastic deformation of the Al layers was the dominant deformation mechanism at both temperatures, but the Al layers were extruded out of the micropillar at 100 °C, while Al plastic flow was constrained by the SiC elastic layers at 23 °C. Finite element simulations of the micropillar compression test indicated the role played by different factors (flow stress of Al, interface strength and friction coefficient) on the mechanical behavior and were able to rationalize the differences in the stress–strain curves between 23 °C and 100 °C.

Relevance: 20.00%

Abstract:

In this work, a new methodology is devised to obtain the fracture properties of nuclear fuel cladding in the hoop direction. The proposed method combines ring compression tests with a finite element model that includes a damage formulation based on cohesive crack theory, applied to unirradiated, hydrogen-charged ZIRLO™ nuclear fuel cladding. Samples with hydrogen concentrations from 0 to 2000 ppm were tested at 20 °C. Agreement between the finite element simulations and the experimental results is excellent in all cases. The parameters of the cohesive crack model are obtained from the simulations, with the fracture energy and fracture toughness being calculated in turn. The evolution of fracture toughness in the hoop direction with hydrogen concentration (up to 2000 ppm) is reported for the first time for ZIRLO™ cladding. Additionally, the fracture micromechanisms are examined as a function of the hydrogen concentration. In the as-received samples, the micromechanism is the nucleation, growth and coalescence of voids, whereas in the samples with 2000 ppm a combination of quasi-cleavage and plastic deformation, along with secondary microcracking, is observed.
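The step from fracture energy to fracture toughness mentioned above is usually Irwin's plane-strain relation, K_Ic = sqrt(E' * G_f) with E' = E / (1 - nu^2). A minimal sketch; the elastic constants below are typical Zr-alloy values assumed for illustration, and the fracture energy passed in is hypothetical, not a result reported in the paper.

```python
import math

def toughness_from_energy(g_f_kj_m2, e_gpa=99.0, nu=0.37):
    """Plane-strain Irwin relation K_Ic = sqrt(E' * G_f).
    E (GPa) and nu are assumed, typical-of-Zr-alloy values.
    Input fracture energy in kJ/m^2; returns K_Ic in MPa*sqrt(m)."""
    e_prime_pa = e_gpa * 1e9 / (1.0 - nu ** 2)   # plane-strain modulus, Pa
    g_f = g_f_kj_m2 * 1e3                        # kJ/m^2 -> J/m^2
    return math.sqrt(e_prime_pa * g_f) / 1e6     # Pa*sqrt(m) -> MPa*sqrt(m)

k = toughness_from_energy(20.0)  # hypothetical cohesive fracture energy
```

With these assumed constants, a 20 kJ/m^2 cohesive energy maps to a toughness of roughly 48 MPa·sqrt(m), illustrating the order of magnitude of the conversion rather than any measured value.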

Relevance: 20.00%

Abstract:

The cyclic compression of several granular systems has been simulated with a molecular dynamics code. All the samples consisted of two-dimensional, soft, frictionless, equal-sized particles that were initially arranged on a square lattice and were compressed by randomly generated irregular walls. The compression protocols can be described by some control variables (the volume or the external force acting on the walls) and by some dimensionless factors that relate stiffness, density, diameter, damping ratio and water surface tension to the external forces, displacements and periods. Each protocol, which is associated with a dynamic process, results in an arrangement with its own macroscopic features, namely volume (or packing ratio), coordination number, and stress, and the differences between packings can be highly significant. The statistical distribution of the force-moment state of the particles (i.e. the equivalent average stress multiplied by the volume) is analyzed. In spite of the lack of a theoretical framework based on statistical mechanics specific to these protocols, the obtained distributions of the mean and relative deviatoric force-moment are characterized. The nature of these distributions and their relation to the specific protocols are then discussed.
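A common contact law for soft frictionless disks of this kind is a linear spring acting on the geometric overlap. The paper does not specify its force model, so the following is a generic DEM/MD-style sketch, with a stiffness chosen arbitrarily:

```python
import math

def contact_force(p1, p2, r1, r2, k=1.0e4):
    """Normal force ON particle 1 from its contact with particle 2,
    modelled as a linear spring on the overlap (soft, frictionless
    disks; no tangential component by construction)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0 or dist == 0.0:
        return (0.0, 0.0)               # particles not in contact
    nx, ny = dx / dist, dy / dist       # unit normal from 1 towards 2
    f = k * overlap                     # repulsive magnitude
    return (-f * nx, -f * ny)           # pushes particle 1 away from 2

# Two unit-radius disks overlapping by 0.2 along x.
f = contact_force((0.0, 0.0), (1.8, 0.0), 1.0, 1.0)
```

Summing such pair forces weighted by branch vectors over all contacts of a particle gives exactly the per-particle force-moment (stress times volume) whose distribution the abstract analyzes.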

Relevance: 20.00%

Abstract:

A model of an adiabatic compressed-air energy storage (CAES) plant is presented. In this plant, surplus wind energy is used to compress air by means of a 25 MW compression train; the compressed air is then stored in a salt cavern at 770 m depth. Compression is carried out at night, over 6 hours, when power prices are lower. When power prices rise during the day, the compressed air is withdrawn from the salt cavern and used to produce energy in a 70 MW expansion train over 3 hours. The chosen location for the plant is the north of Burgos (Castilla y León, Spain), due to the coincidence of several wind farms and a geological formation with suitable storage properties at the same place. The most relevant aspect of this project is its thermal storage, which allows the heat generated during compression to be used to re-heat the air before expansion, eliminating fossil fuels from the system. Hence, this system is an attractive load-balancing solution in a possibly carbon-constrained future, in which the integration of renewable energy sources into the electric grid is a major challenge.

Relevance: 20.00%

Abstract:

In many applications (such as social or sensor networks) the information generated can be represented as a continuous stream of RDF items, where each item describes an application event (a social network post, a sensor measurement, etc.). In this paper we focus on compressing RDF streams. In particular, we propose an approach for lossless RDF stream compression, named RDSZ (RDF Differential Stream compressor based on Zlib). This approach takes advantage of the structural similarities among items in a stream by combining a differential item encoding mechanism with the general-purpose stream compressor Zlib. Empirical evaluation using several RDF stream datasets shows that this combination produces gains in compression ratios with respect to using Zlib alone.
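The differential-plus-Zlib idea can be sketched as follows: each item is encoded as the length of its common prefix with the previous item plus its distinct suffix, and the resulting delta stream is handed to zlib. This shared-prefix scheme is a deliberate simplification of RDSZ's actual subject/predicate pattern reuse, and the sensor triples are made-up examples.

```python
import zlib

def diff_encode(items):
    """Differential item encoding: emit '<prefix-len>|<suffix>' per item,
    exploiting structural similarity between consecutive stream items."""
    out, prev = [], ""
    for item in items:
        n = 0
        while n < min(len(item), len(prev)) and item[n] == prev[n]:
            n += 1
        out.append(f"{n}|{item[n:]}")
        prev = item
    return "\n".join(out)

def diff_decode(blob):
    """Inverse of diff_encode: rebuild each item from the previous one."""
    items, prev = [], ""
    for line in blob.split("\n"):
        n, _, suffix = line.partition("|")
        item = prev[: int(n)] + suffix
        items.append(item)
        prev = item
    return items

stream = [
    '<http://ex.org/sensor/1> <http://ex.org/temp> "21.5" .',
    '<http://ex.org/sensor/1> <http://ex.org/temp> "21.7" .',
    '<http://ex.org/sensor/2> <http://ex.org/temp> "19.9" .',
]

plain = "\n".join(stream).encode()
delta = diff_encode(stream).encode()
# Sizes after zlib; on long real streams the delta form typically wins,
# though on a three-item toy the fixed zlib overhead can dominate.
comp_plain = len(zlib.compress(plain))
comp_delta = len(zlib.compress(delta))
```

The scheme is lossless by construction, which matches RDSZ's requirement: the decoder only ever needs the previous reconstructed item.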

Relevance: 20.00%

Abstract:

LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour-component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the error between the predictions and the actual values is then logarithmically quantised. The main advantage of LHE is that, although it is capable of low-bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, in which the output codes provided by the logarithmic quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bit-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG, and only slightly lower quality than JPEG 2000 while being more computationally efficient.
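The predict-then-logarithmically-quantise step can be sketched for a single row of luminance values. The "hop" ladder below is an illustrative power-of-two choice standing in for LHE's actual hop table, and the left-neighbour predictor is the simplest possible one; both are assumptions, not the published algorithm.

```python
def log_quantise(error, hops=(0, 1, 2, 4, 8, 16, 32, 64)):
    """Map a prediction error to the nearest signed 'hop' in a roughly
    logarithmic ladder: the Weber-Fechner idea that equal *ratios* of
    luminance error are about equally visible."""
    sign = -1 if error < 0 else 1
    mag = abs(error)
    return sign * min(hops, key=lambda h: abs(h - mag))

def encode_row(pixels):
    """Predict each pixel from its left neighbour, quantise the error,
    and track the decoder-visible reconstruction to avoid drift."""
    prev = pixels[0]
    codes = [pixels[0]]            # first pixel sent verbatim
    for p in pixels[1:]:
        q = log_quantise(p - prev)
        codes.append(q)
        prev = prev + q            # what the decoder will reconstruct
    return codes

def decode_row(codes):
    out = [codes[0]]
    for q in codes[1:]:
        out.append(out[-1] + q)
    return out

row = [100, 101, 103, 110, 90, 90]
codes = encode_row(row)
```

Each pixel costs one pass and one table lookup with no buffered state beyond the previous reconstruction, which is the intuition behind the O(n) time and O(1) memory claims.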

Relevance: 20.00%

Abstract:

Optical fiber sensors are a technology that has matured in recent years; however, further development is needed for applications to natural materials such as rocks, which, being complex aggregates, may contain mineral particles and fractures much larger than the electrical strain gauges traditionally used to measure deformation in laboratory tests, so that the measured results may be unrepresentative. In this work, large-area, curved strain sensors based on fiber Bragg gratings (FBG) were designed, manufactured and tested, with the aim of obtaining representative measurements on rocks containing minerals and structures of diverse compositions, sizes and orientations. The manufacturing process of the transducer, its mechanical characterization, its calibration and its evaluation in uniaxial compression tests on rock samples are presented. To verify the efficiency of strain transmission from the rock to the bonded sensor, an analysis of the strain transfer, including the effects of the adhesive, the sample and the transducer, was also performed. The experimental results indicate that the developed sensor provides reliable strain measurement and transfer, a necessary advance for its use in rocks and other heterogeneous materials. They also point to an interesting perspective for applications on irregular surfaces, since the size and shape of the sensing area can be increased at will; this also makes it possible to obtain more reliable results on small samples, and suggests the sensor's suitability for field works in which traditional electrical systems have limitations.
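The measurement principle behind an FBG strain sensor is the standard strain-wavelength relation, delta_lambda / lambda_B = (1 - p_e) * strain. A minimal sketch; the Bragg wavelength of 1550 nm and the effective photo-elastic coefficient p_e of about 0.22 are typical silica-fibre values assumed here, not parameters of the transducer in the paper.

```python
def strain_from_shift(delta_lambda_nm, bragg_nm=1550.0, p_e=0.22):
    """Convert an FBG wavelength shift (nm) to axial strain using
    delta_lambda / lambda_B = (1 - p_e) * strain, where p_e is the
    effective photo-elastic coefficient of the fibre (assumed ~0.22)."""
    return delta_lambda_nm / (bragg_nm * (1.0 - p_e))

# A 1.2 nm shift on a 1550 nm grating is roughly 990 microstrain.
eps = strain_from_shift(1.2)
microstrain = eps * 1e6
```

Note this gives the strain seen by the grating; the strain-transfer analysis described in the abstract is what relates that value to the strain actually present in the rock under the adhesive layer.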

Relevance: 20.00%

Abstract:

Due to the growing size of the data handled by many current information systems, many of the algorithms that traverse these structures lose performance when searching them. Since these data are in many cases represented by node-vertex structures (graphs), the Graph500 benchmark was created in 2009. Earlier benchmarks such as Top500 measure performance in terms of the raw computing capacity of a system, using LINPACK tests; in the case of Graph500, the measurement is carried out by executing a breadth-first search (BFS) over graphs. The BFS algorithm is one of the pillars of many other graph algorithms, such as single-source shortest paths (SSSP) or betweenness centrality, so an improvement in BFS would also benefit the algorithms that use it. Problem analysis: the BFS algorithm used in high-performance computing (HPC) systems is usually a distributed version of the original sequential algorithm. In this distributed version, execution starts by partitioning the graph; each of the distributed processors then computes its part and distributes its results to the other systems. Because the gap between the processing speed of each node and the data-transfer speed of the interconnection network is very large (with the interconnection network at a disadvantage), many approaches have been proposed to reduce the performance lost in these transfers. Regarding the initial partitioning of the graph, the traditional approach (called 1D-partitioned graph) assigns each node a fixed set of vertices to process. To reduce data traffic, another partitioning (2D) was proposed, in which the distribution is based on the edges of the graph instead of the vertices; this partitioning reduces the network traffic from O(N×M) to O(log(N)). Although there have been other approaches to reducing transfers, such as an initial reordering of the vertices to add locality to the nodes, or dynamic partitionings, the approach proposed in this work consists of applying recent compression techniques from large data systems, such as high-volume databases or internet search engines, to compress the data transferred between nodes. ABSTRACT: The breadth-first search (BFS) algorithm is the foundation and building block of many higher-level graph operations such as spanning trees, shortest paths and betweenness centrality. The importance of this algorithm increases every day, because it is a key requirement for many data structures that are becoming popular nowadays and that turn out to be graph structures internally. When the BFS algorithm is parallelized and the data are distributed across several processors, some research shows a performance limitation introduced by the interconnection network [31]. Hence, improvements in the area of communications may benefit the global performance of this key algorithm. In this work an alternative compression mechanism is presented. It differs from existing methods in that it is aware of characteristics of the data that may benefit compression. Apart from this, we also test how this algorithm, in a distributed scenario, benefits from traditional instruction-based optimizations. Finally, we review current supercomputing techniques and the related work being done in the area.
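The proposed combination, level-synchronous BFS with compressed inter-node transfers, can be sketched as follows. There is no real interconnect here: the sketch simply tallies the raw and zlib-compressed byte counts a node would send when exchanging its frontier at each level, and the serialisation format is an assumption for illustration.

```python
import zlib

def bfs_levels(adj, source):
    """Level-synchronous BFS over an adjacency dict. At each level the
    frontier is serialised and zlib-compressed, standing in for the
    compressed transfers between processors proposed above."""
    level = {source: 0}
    frontier = [source]
    sent_raw = sent_compressed = 0
    depth = 0
    while frontier:
        payload = ",".join(map(str, sorted(frontier))).encode()
        sent_raw += len(payload)                     # uncompressed bytes
        sent_compressed += len(zlib.compress(payload))
        depth += 1
        nxt = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in level:                   # first visit fixes the level
                    level[v] = depth
                    nxt.append(v)
        frontier = nxt
    return level, sent_raw, sent_compressed

# Small undirected example graph.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
levels, raw, comp = bfs_levels(adj, 0)
```

On a toy graph zlib's fixed overhead dominates, but on Graph500-scale frontiers the payload regularity (sorted, structured vertex identifiers) is exactly what a data-aware compressor can exploit.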