19 results for Compression Metric
at Universidad Politécnica de Madrid
Abstract:
This study deals with the Hojas Kilométricas (Kilometric Sheets). Specifically, it focuses on the sheets covering the city centre and surrounding area of the Royal Site of Aranjuez, a town in the south of the Province of Madrid. The aim of the study is to restore the actual size and measurements of the scanned images of the Hojas Kilométricas, which would allow us, among other things, to re-establish both the format and the scale of the original plans. To achieve this goal it is necessary to rectify and then georeference these images, i.e. to assign them a geographic reference system. This procedure is essential for overlaying and comparing the Hojas Kilométricas of the Royal Site with other historical cartography, as well as with other sources covering the same area in different time periods. Subsequent research would allow us, for example, to reconstruct the time evolution of the urban area, to spot new construction, and to pinpoint the locations of any altered or missing buildings or architectural features. In addition, this would allow us to develop and integrate databases for GIS models applicable to the management of our cultural heritage.
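As a minimal illustration of the rectification step, the sketch below fits an affine pixel-to-map transform from ground control points by least squares; the point coordinates and the purely affine model are assumptions for illustration (a real workflow may use higher-order or GIS-based transforms).

```python
import numpy as np

# Hypothetical ground control points: (pixel col, pixel row) -> (map X, map Y).
# In practice these would be surveyed features identifiable on the scanned sheet.
pixel = np.array([[120, 80], [4020, 95], [110, 2990], [4000, 3010]], dtype=float)
world = np.array([[447250.0, 4430120.0], [449300.0, 4430110.0],
                  [447245.0, 4428650.0], [449290.0, 4428640.0]])

# Fit an affine transform  [X, Y] = [col, row, 1] @ A  by least squares;
# the residuals give a first estimate of the rectification error.
design = np.hstack([pixel, np.ones((len(pixel), 1))])
coeffs, residuals, *_ = np.linalg.lstsq(design, world, rcond=None)

def pixel_to_map(col, row):
    """Map a scanned-image pixel to georeferenced coordinates."""
    return np.array([col, row, 1.0]) @ coeffs

print(pixel_to_map(2000, 1500))
```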
Abstract:
A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase, with an evolutionary algorithm responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core aimed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered: a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system on the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can easily be extended to adapt other types of image processing cores. Details on this pluggable-component approach are also given in the paper.
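The abstract does not spell out the operators, so the following is only a software sketch of a (mu + lambda) Evolution Strategy of the kind described, with an invented fitness that rewards energy compaction of a filtered signal as a stand-in for the rate/distortion measure used during calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(coeffs, signal):
    # Stand-in objective: energy left outside the largest-magnitude
    # transform coefficients (a proxy for compressibility). The real
    # system measures the compression quality of the evolved wavelet core.
    analysis = np.convolve(signal, coeffs, mode="same")
    kept = np.sort(np.abs(analysis))[::-1][: len(signal) // 8]
    return np.sum(analysis ** 2) - np.sum(kept ** 2)

def evolve(signal, n_coeffs=4, mu=4, lam=12, sigma=0.1, generations=100):
    # (mu + lambda) ES with a fixed mutation step; the hardware-friendly
    # operator details in the paper are not reproduced here.
    pop = [rng.normal(size=n_coeffs) for _ in range(mu)]
    for _ in range(generations):
        offspring = [p + sigma * rng.normal(size=n_coeffs)
                     for p in pop for _ in range(lam // mu)]
        pool = pop + offspring
        pool.sort(key=lambda c: fitness(c, signal))
        pop = pool[:mu]                      # elitist (mu + lambda) selection
    return pop[0]

signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.05 * rng.normal(size=256)
print("best coefficients:", evolve(signal))
```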
Abstract:
High-definition video quality metric built from full-reference ratios. Visual Quality Assessment (VQA) is one of the major open challenges in the multimedia field. Video quality has a very strong impact on the end user's (consumer's) perception of services based on the delivery of multimedia content, and it is therefore a key factor in the assessment of the new paradigm known as Quality of Experience (QoE). Video quality measurement models can be grouped into several branches according to the technical basis of the measurement system; the most prominent are those employing psychovisual models that aim to reproduce the characteristics of the Human Visual System (HVS), and those that instead take an engineering approach in which the quality computation is based on extracting and comparing intrinsic image parameters. Despite the advances made in this field in recent years, research on video quality metrics, whether in the presence of the reference (so-called full-reference models), with part of it (reduced-reference models), or even in its absence (no-reference models), still has ample room for improvement and goals to reach. Among these, the measurement of high-definition signals, in particular the very high-quality signals used in the early stages of the value chain, is of special interest because of its influence on the final quality of the service, and no reliable measurement models currently exist. This doctoral thesis presents a full-reference quality measurement model, which we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), based on the weighting of four quality ratios computed from intrinsic image features: the Fidelity Ratio, computed by means of the morphological (Beucher) gradient; the Visual Similarity Ratio, computed from the visually significant points of the image obtained through local contrast filtering; the Sharpness Ratio, derived from the Haralick contrast texture statistic; and the Complexity Ratio, obtained from the homogeneity measure of the Haralick texture statistics. PARMENIA is novel in its use of mathematical morphology and Haralick statistics as the basis of a quality metric, techniques that have traditionally been tied to remote sensing and object segmentation. The formulation of the metric as a weighted set of ratios is equally novel, since it draws both on structural-similarity models and on more classical ones based on the perceptibility of the error introduced by compression-related signal degradation. PARMENIA shows a very high correlation with the MOS scores obtained from the subjective user tests carried out for its validation. The working corpus was selected from internationally validated sequence sets, so that the reported results are of the highest possible quality and rigour.
The methodology consisted of generating a set of test sequences of different qualities by encoding with different quantization steps, obtaining subjective scores for them through subjective quality tests (based on ITU Recommendation BT.500), and validating the metric by computing the correlation of PARMENIA with these subjective values, quantified through the Pearson correlation coefficient. Once the ratios had been validated, their influence on the final measure optimized, and their high correlation with perception established, a second evaluation was carried out on sequences from HDTV test dataset 1 of the Video Quality Experts Group (VQEG), whose results clearly showed the advantages of the approach.
Abstract: Visual Quality Assessment has so far been one of the most intriguing challenges in the media environment. The progressive evolution towards higher resolutions together with increasing quality requirements (e.g. high definition and better image quality) calls for redefined quality measurement models. Given the growing interest in multimedia service delivery, perceptual quality measurement has become a very active area of research. First, this work introduces a classification of objective video quality metrics based on their underlying methodologies and approaches for measuring video quality, summarizing the state of the art. The thesis then describes an enhanced solution for full-reference objective quality measurement based on mathematical morphology, texture features, and visual similarity information, which provides a normalized metric that we have called PARMENIA (PArallel Ratios MEtric from iNtrInsic features Analysis), highly correlated with MOS scores. The PARMENIA metric is based on the pooling of different quality ratios obtained from three different approaches: Beucher's gradient, local contrast filtering, and the contrast and homogeneity Haralick texture features. The metric's performance is excellent and improves on the current state of the art by providing a wide dynamic range that makes it easier to discriminate between coded sequences of very similar quality, especially at the very high bit rates whose quality is currently transparent to quality metrics. PARMENIA introduces a degree of novelty over other working metrics: on the one hand, it exploits structural information variation to build the metric's kernel, while complementing the measure with texture information and a ratio of visually meaningful points that is closer to typical error-sensitivity-based approaches. We would like to point out that PARMENIA is the only metric built upon full-reference ratios and using mathematical morphology and texture features (typically used in segmentation) for quality assessment. On the other hand, it yields results with a wide dynamic range that allows measuring the quality of high-definition sequences from bit rates of hundreds of megabits per second (Mbps) down to typical distribution rates (5–6 Mbps) and even streaming rates (1–2 Mbps). Thus, a direct correlation between PARMENIA and MOS scores is easily constructed. PARMENIA may further enhance the number of available choices in objective quality measurement, especially for very high-quality HD materials.
All these results come from a validation carried out on internationally validated datasets, on which subjective tests based on the ITU-R BT.500 methodology were conducted. The Pearson correlation coefficient was calculated to verify the accuracy and reliability of PARMENIA.
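A rough software sketch of the four-ratio pooling idea is given below; the equal pooling weights, the window sizes, and the crude stand-in used for the visual-similarity term are assumptions for illustration, not the thesis' actual definitions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import graycomatrix, graycoprops

def beucher_gradient(img):
    # Morphological (Beucher) gradient: grey dilation minus grey erosion.
    return (ndi.grey_dilation(img, size=3).astype(float)
            - ndi.grey_erosion(img, size=3))

def haralick_feature(img, prop):
    # Contrast / homogeneity from a grey-level co-occurrence matrix.
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return graycoprops(glcm, prop)[0, 0]

def ratio(feat_ref, feat_deg):
    # Normalized full-reference ratio in [0, 1].
    return min(feat_ref, feat_deg) / max(feat_ref, feat_deg)

def parmenia_like(ref, deg, weights=(0.25, 0.25, 0.25, 0.25)):
    # Hypothetical equal weights; the thesis optimizes them against MOS.
    r_fidelity   = ratio(beucher_gradient(ref).mean(), beucher_gradient(deg).mean())
    r_similarity = ratio(ref.std(), deg.std())   # crude stand-in for the
                                                 # local-contrast significant points
    r_sharpness  = ratio(haralick_feature(ref, "contrast"),
                         haralick_feature(deg, "contrast"))
    r_complexity = ratio(haralick_feature(ref, "homogeneity"),
                         haralick_feature(deg, "homogeneity"))
    return np.dot(weights, [r_fidelity, r_similarity, r_sharpness, r_complexity])

ref = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
deg = np.clip(ref + np.random.default_rng(2).integers(-8, 8, ref.shape),
              0, 255).astype(np.uint8)
print(parmenia_like(ref, deg))
```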
Abstract:
The mechanical response under compression of LiF single-crystal micropillars oriented in the [111] direction was studied. Micropillars of different diameters (in the range 1–5 µm) were obtained by etching the matrix in directionally solidified NaCl–LiF and KCl–LiF eutectic compounds. Selected micropillars were exposed to high-energy Ga+ ions to ascertain the effect of ion irradiation on the mechanical response. Ion irradiation led to an increase of approximately 30% in the yield strength and the maximum compressive strength, but no effect of the micropillar diameter on the flow stress was found in either the as-grown or the ion-irradiated pillars. The dominant deformation micromechanisms were analyzed by means of crystal plasticity finite element simulations of the compression test, which explained the strong effect of micropillar misorientation on the mechanical response. Finally, the lack of a size effect on the flow stress was discussed in the light of previous studies on LiF and other materials that show high lattice resistance to dislocation motion.
Abstract:
The effect of crystal misorientation, geometrical tilt, and contact misalignment on the compression of highly anisotropic single-crystal micropillars was assessed by means of crystal plasticity finite element simulations. The investigation focused on single crystals with the NaCl structure, such as MgO or LiF, which present a marked plastic anisotropy as a result of the large difference in critical resolved shear stress between the "soft" {110}〈110〉 and the "hard" {100}〈110〉 active slip systems. It was found that contact misalignment led to a large reduction in the initial stiffness of the micropillar for crystals oriented in both the soft and the hard directions. Crystallographic tilt, however, did not modify the initial crystal stiffness. From the viewpoint of the plastic response, none of the effects analyzed led to significant differences in the flow stress when the single crystals were oriented along the "soft" [100] direction. Large differences were found, however, when the single crystal was oriented in the "hard" [111] direction, as a result of the activation of the soft slip system. The numerical simulations were in very good agreement with experimental data from the literature.
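For reference, the slip-system activity that drives such simulations is governed by the resolved shear stress (Schmid) relation; the rate-dependent flow rule below is a common generic form of crystal plasticity, not necessarily the exact one used in this paper:

```latex
\tau^{\alpha} = \boldsymbol{\sigma} : \left( \mathbf{s}^{\alpha} \otimes \mathbf{m}^{\alpha} \right),
\qquad
\dot{\gamma}^{\alpha} = \dot{\gamma}_{0}
\left| \frac{\tau^{\alpha}}{\tau_{c}^{\alpha}} \right|^{n}
\operatorname{sgn}\!\left( \tau^{\alpha} \right),
```

where s^α and m^α are the slip direction and slip-plane normal of system α, and τ_c^α is the critical resolved shear stress, much lower for the soft {110}〈110〉 systems than for the hard {100}〈110〉 ones.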
Abstract:
Results of impact and compression tests on the Chojuro, Twentieth Century, Tsu Li, and Ya Li varieties of Asian pears indicate that Chojuro pears are the firmest and the most resistant to mechanical damage. At harvest time, Tsu Li and Ya Li pears resist mechanical damage nearly as well as Chojuro pears, but they become more susceptible to bruising in cold storage. Twentieth Century pears are the most sensitive to impact and compression bruising. Increased time in the ripening room produces greater softening and a larger increase in bruise resistance in Chojuro and Twentieth Century pears than in Tsu Li and Ya Li pears.
Abstract:
Apple fruits, cv. Granny Smith, were subjected to mechanical impact and compression loads using a steel rod with a spherical tip (19 mm diameter, 50.6 g mass). The applied energies were low, yet sufficient to produce an enzymatic reaction: 0.0120 J for impact and 0.0199 J for compression. Bruised material was cut and examined with a transmission electron microscope. In both compression and impact, bruises showed a central region located in the flesh parenchyma, at a distance approximately equal to the indenter tip radius. The parenchyma cells of this region were more altered than cells from the epidermis and hypodermis. Tissues under compression presented numerous deformed parenchyma cells with broken tonoplasts and tissue degradation, as predicted by several investigators. The impacted cells supported different kinds of stresses than the compressed cells, resulting in intensive vesiculation, either in the vacuole or in the middle lamella region between the cell walls of adjacent cells. A large proportion of parenchyma cells had completely split, or had initiated splitting, at the middle lamella. Bruising may thus develop with or without cell rupture; cell wall rupture is not essential for the development of a bruise, at least for the smallest bruises, as predicted previously.
Abstract:
A novel compression scheme is proposed in which hollow targets with specifically curved structures, initially filled with uniform matter, are driven by converging shock waves. The self-similar dynamics is analyzed for converging and diverging shock waves. The shock-compressed densities and pressures are much higher than those achieved using spherical shocks, owing to geometric accumulation. The dynamic behavior is demonstrated using two-dimensional hydrodynamic simulations. A linear stability analysis for the spherical geometry reveals a new dispersion relation with cut-off mode numbers that depend on the specific-heat ratio, above which eigenmode perturbations are smeared out in the converging phase.
Abstract:
The effect of temperature on the compressive stress–strain behavior of Al/SiC nanoscale multilayers was studied by means of micropillar compression tests at 23 °C and 100 °C. The multilayers (composed of alternating 60 nm-thick layers of nanocrystalline Al and amorphous SiC) showed a very large hardening rate at 23 °C, which led to a flow stress of 3.1 ± 0.2 GPa at 8% strain. The flow stress (and the hardening rate), however, was reduced by 50% at 100 °C. Plastic deformation of the Al layers was the dominant deformation mechanism at both temperatures, but the Al layers were extruded out of the micropillar at 100 °C, whereas Al plastic flow was constrained by the elastic SiC layers at 23 °C. Finite element simulations of the micropillar compression test indicated the role played by different factors (the flow stress of Al, the interface strength, and the friction coefficient) in the mechanical behavior and were able to rationalize the differences between the stress–strain curves at 23 °C and 100 °C.
Abstract:
In this work, a new methodology is devised to obtain the fracture properties of nuclear fuel cladding in the hoop direction. The proposed method combines ring compression tests and a finite element method that includes a damage model based on cohesive crack theory, applied to unirradiated, hydrogen-charged ZIRLO™ nuclear fuel cladding. Samples with hydrogen concentrations from 0 to 2000 ppm were tested at 20 °C. Agreement between the finite element simulations and the experimental results is excellent in all cases. The parameters of the cohesive crack model are obtained from the simulations, and the fracture energy and fracture toughness are calculated in turn. The evolution of hoop-direction fracture toughness with hydrogen concentration (up to 2000 ppm) is reported for the first time for ZIRLO™ cladding. Additionally, the fracture micromechanisms are examined as a function of the hydrogen concentration. In the as-received samples the micromechanism is the nucleation, growth and coalescence of voids, whereas in the samples with 2000 ppm a combination of quasi-cleavage and plastic deformation, along with secondary microcracking, is observed.
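The step from cohesive parameters to toughness follows the standard cohesive-zone relations, stated here in their generic textbook form (a plane-strain conversion is assumed; the paper's exact formulation may differ):

```latex
G_{c} = \int_{0}^{\delta_{f}} \sigma(\delta)\,\mathrm{d}\delta ,
\qquad
K_{Ic} = \sqrt{\frac{E\,G_{c}}{1-\nu^{2}}} \quad \text{(plane strain)},
```

where σ(δ) is the cohesive traction-separation law, δ_f the separation at failure, E Young's modulus and ν Poisson's ratio.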
Abstract:
The cyclic compression of several granular systems has been simulated with a molecular dynamics code. All the samples consisted of two-dimensional, soft, frictionless, equal-sized particles that were initially arranged on a square lattice and were compressed by randomly generated irregular walls. The compression protocols can be described by some control variables (the volume or the external force acting on the walls) and by some dimensionless factors that relate stiffness, density, diameter, damping ratio and water surface tension to the external forces, displacements and periods. Each protocol, which is associated with a dynamic process, results in an arrangement with its own macroscopic features: volume (or packing ratio), coordination number, and stress; the differences between packings can be highly significant. The statistical distribution of the force-moment state of the particles (i.e. the equivalent average stress multiplied by the volume) is analyzed. In spite of the lack of a statistical-mechanics framework specific to these protocols, the obtained distributions of the mean and relative deviatoric force-moment are characterized, and their nature and their relation to the specific protocols are discussed.
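A common discrete-element definition of the per-particle force-moment (the average stress times the particle volume, as used above) is:

```latex
\bar{\boldsymbol{\sigma}}_{p}\, V_{p} \;=\; \sum_{c \,\in\, p} \mathbf{f}^{\,c} \otimes \mathbf{l}^{\,c},
```

where the sum runs over the contacts c of particle p, f^c is the contact force and l^c the branch vector from the particle centre to the contact point. This is the standard formula for this quantity; the paper may use an equivalent variant.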
Abstract:
Evaluating and measuring the pedagogical quality of Learning Objects is essential for successful web-based education. On the one hand, teachers need some assurance of the quality of teaching resources before making them part of the curriculum. On the other hand, Learning Object Repositories need to include quality information in the ranking metrics used by their search engines in order to save users time when searching. For these reasons, several models such as LORI (Learning Object Review Instrument) have been proposed to evaluate Learning Object quality from a pedagogical perspective. However, not much effort has been put into defining and evaluating quality metrics based on those models. This paper proposes and evaluates a set of pedagogical quality metrics based on LORI. The work shows that these metrics can be used effectively and reliably to provide quality-based sorting of search results. Moreover, it provides strong evidence that evaluating Learning Objects from a pedagogical perspective can notably enhance Learning Object search if suitable evaluation models and quality metrics are used. An evaluation of the LORI model is also described. Finally, all the presented metrics are compared, and their weaknesses and strengths are discussed.
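As a toy illustration of quality-based result ranking, the sketch below aggregates the nine LORI items with a plain arithmetic mean over the rated items; the item keys and the aggregation rule are assumptions, since the paper evaluates several candidate metrics.

```python
from statistics import mean

# The nine LORI rating items (1-5 scale); unrated items are simply absent.
LORI_ITEMS = ["content_quality", "learning_goal_alignment", "feedback_adaptation",
              "motivation", "presentation_design", "interaction_usability",
              "accessibility", "reusability", "standards_compliance"]

def lori_score(ratings):
    """Aggregate one evaluation: arithmetic mean over the items that were
    rated. This simple rule is an assumption, not the paper's metric."""
    rated = [ratings[i] for i in LORI_ITEMS if ratings.get(i) is not None]
    return mean(rated) if rated else 0.0

def rank_results(results):
    # Quality-based sorting of search results: highest pedagogical score first.
    return sorted(results, key=lambda r: lori_score(r["ratings"]), reverse=True)

results = [
    {"id": "lo-17", "ratings": {"content_quality": 4, "motivation": 5}},
    {"id": "lo-42", "ratings": {"content_quality": 3, "reusability": 2}},
]
print([r["id"] for r in rank_results(results)])   # -> ['lo-17', 'lo-42']
```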
Abstract:
In many applications (such as social or sensor networks) the information generated can be represented as a continuous stream of RDF items, where each item describes an application event (a social network post, a sensor measurement, etc.). In this paper we focus on compressing RDF streams. In particular, we propose an approach for lossless RDF stream compression, named RDSZ (RDF Differential Stream compressor based on Zlib). This approach takes advantage of the structural similarities among items in a stream by combining a differential item encoding mechanism with the general-purpose stream compressor Zlib. Empirical evaluation using several RDF stream datasets shows that this combination produces gains in compression ratios with respect to using Zlib alone.
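The following toy encoder conveys the idea of combining differential item encoding with Zlib; it is not the published RDSZ format, and the repeated-pattern marker and item layout are invented for illustration.

```python
import zlib

def encode_item(triples, prev, compressor):
    """Toy differential encoder in the spirit of RDSZ: when the
    predicate/object pattern at position i repeats the previous item's,
    emit only the new subject; Zlib then compresses the residual text."""
    lines = []
    for i, (s, p, o) in enumerate(triples):
        if prev is not None and i < len(prev) and prev[i][1:] == (p, o):
            lines.append(f"~ {s}")            # repeated pattern: subject only
        else:
            lines.append(f"{s} {p} {o}")
    return compressor.compress(("\n".join(lines) + "\n\n").encode())

# Two structurally similar stream items (e.g. sensor observations).
items = [
    [("<s1>", "<type>", "<Observation>"), ("<s1>", "<value>", '"21.5"')],
    [("<s2>", "<type>", "<Observation>"), ("<s2>", "<value>", '"21.7"')],
]
compressor = zlib.compressobj()
payload, prev = b"", None
for item in items:
    payload += encode_item(item, prev, compressor)
    prev = item
payload += compressor.flush()
print(len(payload), "compressed bytes")
```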
Abstract:
LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the error between the predictions and the actual values is logarithmically quantised. The main advantage of LHE is that, although it achieves low-bit-rate encoding with high-quality results in terms of peak signal-to-noise ratio (PSNR) and of full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, in which the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bits-per-pixel rate is low: it shows much better quality, in terms of PSNR and FSIM, than JPEG, and slightly lower quality than JPEG-2000, while being more computationally efficient.
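A minimal sketch of the logarithmic-hop idea follows; the fixed hop table and the left-neighbour predictor are simplifying assumptions (the actual LHE adapts hop sizes per pixel and predicts from several surrounding pixels).

```python
import numpy as np

# Logarithmically spaced hop magnitudes (Weber-Fechner inspired);
# these fixed values are an assumption, not LHE's adaptive table.
HOPS = np.array([-64, -16, -4, 0, 4, 16, 64])

def encode_row(row):
    """Predict each pixel from the running prediction and emit the index of
    the nearest logarithmic hop; O(n) time and O(1) extra memory, as in LHE."""
    pred = int(row[0])
    codes = []
    for px in row:
        err = int(px) - pred
        idx = int(np.argmin(np.abs(HOPS - err)))       # nearest hop
        codes.append(idx)
        pred = int(np.clip(pred + HOPS[idx], 0, 255))  # decoder-side prediction
    return codes

row = np.array([100, 104, 120, 119, 60, 58, 200], dtype=np.uint8)
print(encode_row(row))
```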