922 results for Code compression
Abstract:
In recent years, locating people and objects and communicating with them in real time has become a common part of everyday life. Unlike outdoor positioning, where GPS is the dominant technology, no single technology currently dominates the state of the art in indoor location systems. Each indoor location technology offers a set of features that prevents its use across all application scenarios, yet its characteristics allow it to coexist well with similar technologies, without any one becoming dominant over the other indoor location systems. In this context, the European project SELECT studies the opportunity of combining these different features in an innovative system that can be used in a large number of application scenarios. The goal of the project is to build a wireless system in which a network of fixed readers queries one or more tags attached to the objects to be located. The SELECT consortium is composed of European institutions and companies, including Datalogic S.p.A. and CNIT, which are responsible for the software and firmware development of the baseband receiving section of the readers, whose function is to acquire and process the information received from generic tagged objects. Since the SELECT project is highly innovative, one of the key stages of the system design is the debug phase. This work aims to study and develop tools and techniques for debugging the firmware of the baseband receiving section of the readers.
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This allows the compression ratio of the source (DCT coefficient bitplane) to be changed gracefully according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance over a wide range of source correlations and a better rate-distortion (RD) performance than the popular turbo codes.
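As a minimal sketch of the node-merging idea described above (not the authors' actual construction), the snippet below merges selected pairs of rows of a binary parity-check matrix by XOR, so fewer syndrome bits are transmitted for the same source bitplane; the toy matrix, the row pairing and the random data are all assumptions made only for illustration.

```python
import numpy as np

def merge_check_rows(H, row_pairs):
    """Merge selected pairs of parity-check rows by XOR (addition over GF(2)).

    H         : binary parity-check matrix of shape (m, n)
    row_pairs : list of (i, j) index pairs; each pair is replaced by one merged row
    Returns a matrix with fewer rows, i.e. fewer transmitted syndrome bits and
    therefore a higher compression of the source bitplane.
    """
    merged = [H[i] ^ H[j] for i, j in row_pairs]
    paired = {k for pair in row_pairs for k in pair}
    kept = [H[k] for k in range(H.shape[0]) if k not in paired]
    return np.array(kept + merged, dtype=np.uint8)

# Toy example: 4 syndrome bits reduced to 3 for an 8-bit source bitplane.
H = np.random.randint(0, 2, size=(4, 8), dtype=np.uint8)
H_merged = merge_check_rows(H, [(0, 1)])
x = np.random.randint(0, 2, size=8, dtype=np.uint8)   # source bitplane
syndrome = H_merged @ x % 2                            # transmitted syndrome
print(H.shape, "->", H_merged.shape, "syndrome bits:", syndrome.size)
```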
Abstract:
With the coupled use of multibeam swath bathymetry, high-resolution subbottom profiling and sediment coring from icebreakers in the Arctic Ocean, there is a growing awareness of the prevalence of Quaternary ice-grounding events on many of the topographic highs found in present water depths of <1000 m. In some regions, such as the Lomonosov Ridge and Yermak Plateau, overconsolidated sediments sampled through either drilling or coring are found beneath seismically imaged unconformities of glacigenic origin. However, there exists no comprehensive analysis of the geotechnical properties of these sediments, or of how their inferred stress state may be related to different glacigenic processes or types of ice-loading. Here we combine geophysical, stratigraphic and geotechnical measurements from the Lomonosov Ridge and Yermak Plateau and discuss the glacial geological implications of overconsolidated sediments. The degree of overconsolidation, determined from measurements of porosity and shear strength, is shown to result from consolidation and/or deformation below grounded ice and, with the exception of a single region on the Lomonosov Ridge, cannot be explained by erosion of overlying sediments. We demonstrate that the amount and depth of porosity loss associated with a middle Quaternary (~790-950 thousand years ago, ka) grounding on the Yermak Plateau is compatible with sediment consolidation under an ice sheet or ice rise. Conversely, geotechnical properties of sediments from beneath late Quaternary ice-groundings in both regions, independently dated to Marine Isotope Stage (MIS) 6, indicate a more transient event commensurate with a passing tabular iceberg calved from an ice shelf.
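For context, the degree of overconsolidation referred to above is conventionally expressed through the overconsolidation ratio; the definition below is the standard geotechnical relation, added here for clarity rather than taken from the study itself.

```latex
% Standard definition of the overconsolidation ratio (added for context)
\mathrm{OCR} = \frac{\sigma'_{p}}{\sigma'_{v0}}
% \sigma'_{p}  : preconsolidation stress, the maximum past effective vertical stress
%                (e.g. imposed by grounded ice)
% \sigma'_{v0} : present-day in situ effective vertical stress;
%                OCR > 1 marks an overconsolidated sediment
```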
Abstract:
The cyclic compression of several granular systems has been simulated with a molecular dynamics code. All the samples consisted of two-dimensional, soft, frictionless, equal-sized particles that were initially arranged on a square lattice and were compressed by randomly generated irregular walls. The compression protocols can be described by control variables (volume or external force acting on the walls) and by dimensionless factors that relate stiffness, density, diameter, damping ratio and water surface tension to the external forces, displacements and periods. Each protocol, which is associated with a dynamic process, results in an arrangement with its own macroscopic features: volume (or packing ratio), coordination number and stress, and the differences between packings can be highly significant. The statistical distribution of the force-moment state of the particles (i.e. the equivalent average stress multiplied by the volume) is analyzed. Despite the lack of a statistical-mechanics framework specific to these protocols, the obtained distributions of mean and relative deviatoric force-moment are characterized, and their nature and their relation to the specific protocols are then discussed.
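To make the force-moment quantity concrete, the sketch below shows one common way a per-particle force-moment tensor (average stress times particle volume) can be accumulated from a contact list and split into the mean and deviatoric parts whose distributions are analyzed; the data layout and function names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def force_moment_tensors(num_particles, contacts):
    """Accumulate the per-particle force-moment tensor (average stress times volume).

    contacts : list of (i, j, r_i, r_j, f) where
               i, j : indices of the two particles in contact,
               r_i  : 2-D vector from the centre of particle i to the contact point,
               r_j  : 2-D vector from the centre of particle j to the contact point,
               f    : 2-D contact force exerted on particle i by particle j.
    Returns an array of shape (num_particles, 2, 2).
    """
    S = np.zeros((num_particles, 2, 2))
    for i, j, r_i, r_j, f in contacts:
        S[i] += np.outer(r_i, f)     # contribution of this contact to particle i
        S[j] += np.outer(r_j, -f)    # reaction force acts on particle j
    return S

def mean_and_deviatoric(S):
    """Split each 2x2 force-moment tensor into its mean (isotropic) part and the
    magnitude of its deviatoric part, the two quantities whose statistics are studied."""
    mean = 0.5 * np.trace(S, axis1=1, axis2=2)
    dev = S - mean[:, None, None] * np.eye(2)
    return mean, np.linalg.norm(dev, axis=(1, 2))
```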
Abstract:
The current scheme for standardization and the design of new video coding standards is finding it increasingly difficult to keep up with the evolution and dynamism of the video coding community. The problem centres mainly on being able to exploit all the features and similarities among the different codecs and coding standards, which has forced the redesign of parts common to several of them. This problem gave rise to a new standardization initiative within the ISO/IEC MPEG committee, called Reconfigurable Video Coding (RVC). Its main idea was to develop a video coding standard that progressively updates and extends a library of components, providing flexibility and reconfigurable code through the use of a new actor/dataflow-oriented language called CAL. This language is used for the specification of the standard library and for instantiating the decoder model. Later, a new video coding standard called High Efficiency Video Coding (HEVC) was developed, which is still in a continuous process of updating and development and which improves the efficiency and compression of video coding. An HEVC version has, of course, also been developed following the RVC methodology. This final-year project (PFC) uses several standard implementations built with RVC, for example the MPEG-4 Part 2 SP and MPEG-4 Part 10 CBP and PHP decoders as well as the new HEVC coding standard, highlighting the characteristics and usefulness of each of them. In RVC, algorithms are described by a set of actors that exchange data streams (tokens) to perform different actions. The objective of this project is to develop a program that, starting from the aforementioned decoders, a set of video sequences in different compression formats and a standard distribution of the actors (for each of the decoders), is able to generate different distributions of the decoder actors over one or more processors of the system on which it runs, in order to achieve the highest video coding efficiency. The purpose of the program developed in this project is to make it easier to map the actors onto the cores of the system and to obtain the best possible configurations automatically and efficiently.
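As a rough illustration of the actor-to-core mapping problem described above (not the project's actual tool), the sketch below greedily assigns actors with estimated workloads to a given number of cores so that the most loaded core stays as light as possible; the actor names and workload figures are made up for the example.

```python
def map_actors_to_cores(actor_load, num_cores):
    """Greedy longest-processing-time assignment of decoder actors to cores.

    actor_load : dict mapping actor name -> estimated relative workload
    num_cores  : number of processors available
    Returns (assignment, load), where assignment maps core index -> list of actors.
    """
    assignment = {c: [] for c in range(num_cores)}
    load = [0.0] * num_cores
    for actor, work in sorted(actor_load.items(), key=lambda kv: kv[1], reverse=True):
        c = min(range(num_cores), key=lambda k: load[k])   # least-loaded core so far
        assignment[c].append(actor)
        load[c] += work
    return assignment, load

# Hypothetical workloads for a few decoder actors (illustrative only).
actors = {"parser": 1.0, "inverse_transform": 2.5, "motion_comp": 3.0,
          "intra_pred": 1.5, "deblocking": 2.0}
mapping, load = map_actors_to_cores(actors, num_cores=2)
print(mapping)   # which actors run on which core
print(load)      # estimated load per core
```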
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
In the area of dry particle breakage, Discrete Element Method (DEM) simulations have been widely used to analyse the sensitivity of agglomerate breakage behaviour to various physical parameters. This paper looks at the effect of agglomerate shape and structure on the mechanisms and extent of breakage of dry agglomerates under compressive load using DEM simulations. In the simulations, a spherical agglomerate generated within the DEM code is compared with an irregularly shaped agglomerate whose structure is that of an actual granule characterised with X-ray microtomography (muCT). Both agglomerates have identical particle size distribution, coordination number and surface energy values; only the agglomerate shape and structure differ between the two. The work details the breakage behaviour through a number of traditional DEM output parameters (i.e., contact/cluster distributions), which show vastly different behaviour between the two agglomerates.
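For readers unfamiliar with one of the matched quantities above, the snippet below is a minimal, generic sketch (not tied to any particular DEM package) of how the mean coordination number of an agglomerate can be computed from particle positions and radii.

```python
import numpy as np

def mean_coordination_number(centres, radii, tol=1e-9):
    """Mean number of contacts per particle in an agglomerate.

    centres : (N, 3) array of particle centre coordinates
    radii   : (N,)   array of particle radii
    Two particles are counted as in contact when the gap between their surfaces
    is at most tol, i.e. centre distance <= sum of radii + tol."""
    n = len(radii)
    contacts = 0
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centres[i] - centres[j]) <= radii[i] + radii[j] + tol:
                contacts += 1
    return 2.0 * contacts / n   # each contact is shared by two particles
```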
Abstract:
The focus of this thesis is text data compression based on the fundamental coding scheme referred to as the American Standard Code for Information Interchange (ASCII). The research objective is the development of software algorithms that result in significant compression of text data. Past and current compression techniques have been thoroughly reviewed to ensure a proper contrast between the compression results of the proposed technique and those of existing ones. The research problem stems from the need to achieve higher compression of text files in order to save valuable memory space and increase the transmission rate of these files. The compression algorithm to be developed would have to be effective even for small files and able to contend with uncommon words, which are dynamically added to the dictionary once they are encountered. A critical design aspect of this compression technique is its compatibility with existing compression techniques; in other words, the developed algorithm can be used in conjunction with existing techniques to yield even higher compression ratios. This thesis demonstrates such capabilities and outcomes, and the research objective of achieving a higher compression ratio is attained.
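To make the idea of a dynamically growing word dictionary concrete, the sketch below is a deliberately simple illustration (not the thesis's actual algorithm): known words are emitted as dictionary indices, while each unseen word is sent literally and then added to the dictionary so that later occurrences compress.

```python
def encode(words, dictionary=None):
    """Toy dictionary coder: emit ('ref', index) for known words and
    ('lit', word) for unseen words, adding each unseen word to the dictionary."""
    dictionary = dict(dictionary or {})
    out = []
    for w in words:
        if w in dictionary:
            out.append(("ref", dictionary[w]))
        else:
            out.append(("lit", w))
            dictionary[w] = len(dictionary)   # dynamic inclusion of uncommon words
    return out, dictionary

def decode(tokens):
    """Inverse of encode: rebuild the word sequence and the dictionary in lockstep."""
    dictionary, words = [], []
    for kind, value in tokens:
        if kind == "lit":
            dictionary.append(value)
            words.append(value)
        else:
            words.append(dictionary[value])
    return words

tokens, _ = encode("to be or not to be".split())
assert decode(tokens) == "to be or not to be".split()
```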
Abstract:
OBJECTIVES: The complexity and heterogeneity of human bone, as well as ethical issues, almost always hinder the performance of clinical trials. Thus, in vitro studies become an important source of information for the understanding of biomechanical events in implant-supported prostheses, although study results cannot be considered reliable unless validation studies are conducted. The purpose of this work was to validate an artificial experimental model, based on its modulus of elasticity, to simulate the performance of human bone in vivo in biomechanical studies of implant-supported prostheses. MATERIAL AND METHODS: In this study, fast-curing polyurethane (F16 polyurethane, Axson) was used to build 40 specimens that were divided into five groups. The following reagent ratios (part A/part B) were used: Group A (0.5/1.0), Group B (0.8/1.0), Group C (1.0/1.0), Group D (1.2/1.0), and Group E (1.5/1.0). A universal testing machine (Kratos model K - 2000 MP) was used to measure modulus of elasticity values by compression. RESULTS: Mean modulus of elasticity values were: Group A - 389.72 MPa, Group B - 529.19 MPa, Group C - 571.11 MPa, Group D - 470.35 MPa, Group E - 437.36 MPa. CONCLUSION: The best mechanical characteristics and a modulus of elasticity comparable to that of human trabecular bone were obtained when the A/B ratio was 1:1.
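As background to the reported values, the modulus of elasticity obtained from a uniaxial compression test follows the standard stress-strain relation below; this is a textbook relation added for context, not a formula stated in the abstract.

```latex
% Elastic modulus from a uniaxial compression test (standard relation, added for context)
E = \frac{\sigma}{\varepsilon} = \frac{F / A}{\Delta L / L_{0}}
% F : applied load, A : specimen cross-sectional area,
% L_{0} : initial specimen height, \Delta L : measured shortening under load
```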