971 results for Data Compression
Abstract:
Adiabatic compression testing of components in gaseous oxygen is a test method that is utilized worldwide and is commonly required to qualify a component for ignition tolerance under its intended service. This testing is required by many industry standards organizations and government agencies; however, a thorough evaluation of the test parameters and test system influences on the thermal energy produced during the test has not yet been performed. This paper presents a background for adiabatic compression testing and discusses an approach to estimating potential differences in the thermal profiles produced by different test laboratories. A “Thermal Profile Test Fixture” (TPTF) is described that is capable of measuring and characterizing the thermal energy for a typical pressure shock by any test system. The test systems at Wendell Hull & Associates, Inc. (WHA) in the USA and at the BAM Federal Institute for Materials Research and Testing in Germany are compared in this manner and some of the data obtained is presented. The paper also introduces a new way of comparing the test method to idealized processes to perform system-by-system comparisons. Thus, the paper introduces an “Idealized Severity Index” (ISI) of the thermal energy to characterize a rapid pressure surge. From the TPTF data a “Test Severity Index” (TSI) can also be calculated so that the thermal energies developed by different test systems can be compared to each other and to the ISI for the equivalent isentropic process. Finally, a “Service Severity Index” (SSI) is introduced to characterize the thermal energy of actual service conditions. This paper is the second in a series of publications planned on the subject of adiabatic compression testing.
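The ISI above benchmarks measured thermal energy against the equivalent isentropic process. As a rough illustration only (not the paper's TPTF analysis, and with hypothetical function name and example pressures), the ideal-gas isentropic relation gives the idealized final temperature of a pressure surge:

```python
def isentropic_final_temp(t1_k, p1, p2, gamma=1.4):
    """Ideal-gas isentropic compression: T2 = T1 * (P2/P1)**((gamma-1)/gamma)."""
    return t1_k * (p2 / p1) ** ((gamma - 1.0) / gamma)

# Illustrative only: gas surged from 0.1 MPa to 25 MPa, starting at 293 K,
# exceeds 1000 K in the idealized (isentropic) limit.
t2 = isentropic_final_temp(293.0, 0.1e6, 25.0e6)
```

In practice the measured temperature rise is lower than this ideal, which is why a test-system-specific TSI is compared against the ISI.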
Abstract:
This paper presents an overview of our demonstration of a low-bandwidth, wireless camera network where image compression is undertaken at each node. We briefly introduce the Fleck hardware platform we have developed as well as describe the image compression algorithm which runs on individual nodes. The demo will show real-time image data coming back to base as individual camera nodes are added to the network. Copyright 2007 ACM.
Abstract:
Background: Up to 1% of adults will suffer from leg ulceration at some time. The majority of leg ulcers are venous in origin and are caused by high pressure in the veins due to blockage or weakness of the valves in the veins of the leg. Prevention and treatment of venous ulcers are aimed at reducing the pressure either by removing / repairing the veins, or by applying compression bandages / stockings to reduce the pressure in the veins. The vast majority of venous ulcers are healed using compression bandages. Once healed, they often recur and so it is customary to continue applying compression in the form of bandages, tights, stockings or socks in order to prevent recurrence. Compression bandages or hosiery (tights, stockings, socks) are often applied for ulcer prevention. Objectives To assess the effects of compression hosiery (socks, stockings, tights) or bandages in preventing the recurrence of venous ulcers. To determine whether there is an optimum pressure/type of compression to prevent recurrence of venous ulcers. Search methods The searches for the review were first undertaken in 2000. For this update we searched the Cochrane Wounds Group Specialised Register (October 2007), The Cochrane Central Register of Controlled Trials (CENTRAL) - The Cochrane Library 2007 Issue 3, Ovid MEDLINE - 1950 to September Week 4 2007, Ovid EMBASE - 1980 to 2007 Week 40 and Ovid CINAHL - 1982 to October Week 1 2007. Selection criteria Randomised controlled trials evaluating compression bandages or hosiery for preventing venous leg ulcers. Data collection and analysis Data extraction and assessment of study quality were undertaken by two authors independently. Results No trials compared recurrence rates with and without compression. One trial (300 patients) compared high (UK Class 3) compression hosiery with moderate (UK Class 2) compression hosiery.
An intention-to-treat analysis found no significant reduction in recurrence at five years follow up associated with high compression hosiery compared with moderate compression hosiery (relative risk of recurrence 0.82, 95% confidence interval 0.61 to 1.12). This analysis would tend to underestimate the effectiveness of the high compression hosiery because a significant proportion of people changed from high compression to medium compression hosiery. Compliance rates were significantly higher with medium compression than with high compression hosiery. One trial (166 patients) found no statistically significant difference in recurrence between two types of medium (UK Class 2) compression hosiery (relative risk of recurrence with Medi was 0.74, 95% confidence interval 0.45 to 1.2). Both trials reported that not wearing compression hosiery was strongly associated with ulcer recurrence and this is circumstantial evidence that compression reduces ulcer recurrence. No trials were found which evaluated compression bandages for preventing ulcer recurrence. Authors' conclusions No trials compared compression versus no compression for prevention of ulcer recurrence. Not wearing compression was associated with recurrence in both studies identified in this review. This is circumstantial evidence of the benefit of compression in reducing recurrence. Recurrence rates may be lower in high compression hosiery than in medium compression hosiery and therefore patients should be offered the strongest compression with which they can comply. Further trials are needed to determine the effectiveness of hosiery prescribed in other settings, i.e. in the UK community, in countries other than the UK.
Abstract:
Aims To identify self-care activities undertaken and determine relationships between self-efficacy, depression, quality of life, social support and adherence to compression therapy in a sample of patients with chronic venous insufficiency. Background Up to 70% of venous leg ulcers recur after healing. Compression hosiery is a primary strategy to prevent recurrence; however, problems with adherence to this strategy are well documented and an improved understanding of how psychosocial factors influence patients with chronic venous insufficiency will help guide effective preventive strategies. Design Cross-sectional survey and retrospective medical record review. Method All patients previously diagnosed with a venous leg ulcer which had healed between 12 and 36 months prior to the study were invited to participate. Data on health, psychosocial variables and self-care activities were obtained from a self-report survey and data on medical and previous ulcer history were obtained from medical records. Multiple linear regression modelling was used to determine the independent influences of psychosocial factors on adherence to compression therapy. Results In a sample of 122 participants, the most frequently identified self-care activities were application of topical skin treatments, wearing compression hosiery and covering legs to prevent trauma. Compression hosiery was worn for a median of 4 days/week (range 0–7). After adjustment for all variables and potential confounders in a multivariable regression model, wearing compression hosiery was found to be significantly positively associated with participants’ knowledge of the cause of their condition (p=0.002), higher self-efficacy scores (p=0.026) and lower depression scores (p=0.009). Conclusion In this sample, depression, self-efficacy and knowledge were found to be significantly related to adherence to compression therapy.
Relevance to clinical practice These findings support the need to screen for and treat depression in this population. In addition, strategies to improve patient knowledge and self-efficacy may positively influence adherence to compression therapy.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for a reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
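The quantization model in this abstract is built on the generalized Gaussian distribution. As a minimal sketch (my own illustration, not the thesis code), its density with scale parameter alpha and shape parameter beta can be written as:

```python
import math

def ggd_pdf(x, alpha, beta):
    """Generalized Gaussian density with scale alpha and shape beta.
    beta = 2 recovers a Gaussian; beta = 1 recovers a Laplacian."""
    coeff = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coeff * math.exp(-((abs(x) / alpha) ** beta))
```

Fitting beta per subband (the thesis uses a least-squares estimate on a nonlinear function of the shape parameter) lets the quantizer adapt to how sharply peaked the wavelet coefficients are.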
Abstract:
There are an increasing number of compression systems available for treatment of venous leg ulcers and limited evidence on the relative effectiveness of these systems. The purpose of this study was to conduct a randomised controlled trial to compare the effectiveness of a 4-layer compression bandage system with Class 3 compression hosiery on healing and quality of life in patients with venous leg ulcers. Data were collected from 103 participants on demographics, health, ulcer status, treatments, pain, depression and quality of life for 24 weeks. After 24 weeks, 86% of the 4-layer bandage group and 77% of the hosiery group were healed (p=0.24). Median time to healing for the bandage group was 10 weeks, in comparison to 14 weeks for the hosiery group (p=0.018). Cox proportional hazards regression found participants in the 4-layer system were 2.1 times (95% CI 1.2–3.5) more likely to heal than those in hosiery, while longer ulcer duration, larger ulcer area and higher depression scores significantly delayed healing. No differences between groups were found in quality of life or pain measures. Findings indicate these systems were equally effective in healing patients by 24 weeks; however, a 4-layer system may produce a more rapid response.
Abstract:
Compression is desirable for network applications as it saves bandwidth; however, when data is compressed before being encrypted, the amount of compression leaks information about the amount of redundancy in the plaintext. This side channel has led to successful CRIME and BREACH attacks on web traffic protected by the Transport Layer Security (TLS) protocol. The general guidance in light of these attacks has been to disable compression, preserving confidentiality but sacrificing bandwidth. In this paper, we examine two techniques, heuristic separation of secrets and fixed-dictionary compression, for enabling compression while protecting high-value secrets, such as cookies, from attack. We model the security offered by these techniques and report on the amount of compressibility that they can achieve.
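The length side channel behind CRIME and BREACH can be illustrated in a few lines: when attacker-controlled input is compressed in the same stream as a secret, a guess that matches the secret compresses into a back-reference and yields shorter output. This is a toy sketch using zlib with made-up field names, not the paper's model:

```python
import zlib

def compressed_len(secret: bytes, guess: bytes) -> int:
    # The attacker's guess is compressed alongside the secret, as when a
    # request body reflects attacker input next to a session cookie.
    return len(zlib.compress(b"cookie=" + secret + b"; echo=" + guess))

secret = b"SESSION=abc123"
# A correct guess repeats the secret and compresses better than a wrong
# one; observing that length difference is the attack's only requirement.
right = compressed_len(secret, b"SESSION=abc123")
wrong = compressed_len(secret, b"SESSION=qrstuv")
```

Repeating this byte-by-byte lets an attacker recover the secret, which is why the paper's countermeasures isolate secrets from attacker-influenced data or restrict matching to a fixed dictionary.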
Abstract:
Aim This study assessed the association between compression use and changes in lymphoedema observed in women with breast cancer-related lymphoedema who completed a 12 week exercise intervention. Methods This work uses data collected from a 12 week exercise trial, whereby women were randomly allocated into either aerobic-based only (n=21) or resistance-based only (n=20) exercise. Compression use during the trial was at the participant’s discretion. Differences in lymphoedema (measured by L-Dex score and inter-limb circumference difference [%]) and associated symptoms between those who wore, and did not wear compression during the 12 week intervention were assessed. We also explored participants’ reasons surrounding compression during exercise. Results No significant interaction effect between time and compression use for lymphoedema was observed. There was no difference between groups over time in the number or severity of lymphoedema symptoms. Irrespective of compression use, there were trends for reductions in the proportion of women reporting severe symptoms, but lymphoedema status did not change. Individual reasons for the use of compression, or lack thereof, varied markedly. Conclusion Our findings demonstrated an absence of a positive or negative effect from compression use during exercise on lymphoedema. Current and previous findings suggest the clinical recommendation that garments must be worn during exercise is questionable, and its application requires an individualised approach.
Abstract:
Telecommunications network management is based on huge amounts of data that are continuously collected from elements and devices from all around the network. The data is monitored and analysed to provide information for decision making in all operation functions. Knowledge discovery and data mining methods can support fast-pace decision making in network operations. In this thesis, I analyse decision making on different levels of network operations. I identify the requirements that decision making sets for knowledge discovery and data mining tools and methods, and I study the resources that are available to them. I then propose two methods for augmenting and applying frequent sets to support everyday decision making. The proposed methods are Comprehensive Log Compression for log data summarisation and Queryable Log Compression for semantic compression of log data. Finally I suggest a model for a continuous knowledge discovery process and outline how it can be implemented and integrated into the existing network operations infrastructure.
Abstract:
Classification of large datasets is a challenging task in Data Mining. In the current work, we propose a novel method that compresses the data and classifies the test data directly in its compressed form. The work forms a hybrid learning approach integrating the activities of data abstraction, frequent item generation, compression, classification and use of rough sets.
Abstract:
The use of binary fluid systems in thermally driven vapour absorption and mechanically driven vapour compression refrigeration and heat pump cycles has provided an impetus for obtaining experimental data on caloric properties of such fluid mixtures. However, direct measurements of these properties are somewhat scarce, even though the calorimetric techniques described in the literature are quite adequate. Most of the design data are derived through calculations using theoretical models and vapour-liquid equilibrium data. This article addresses the choice of working fluids and the current status of data availability vis-à-vis engineering applications. Particular emphasis is on organic working fluid pairs.
Abstract:
We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process also known as graph simplification, nodes and (unweighted) edges are grouped to supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e. decompression) of a compressed, weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by one, and that the weight of an edge is approximated to be the weight of the superedge. The compression problem now consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
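The basic compression step described above can be sketched in a toy form: merge two nodes into a supernode and approximate every replaced edge weight by the superedge weight (here, their mean). This is my own illustration, not the authors' algorithms:

```python
def compress_pair(edges, u, v, super_id):
    """edges: dict mapping frozenset({a, b}) -> weight.
    Merge nodes u and v into supernode super_id; each superedge weight
    is the mean of the original edge weights it replaces."""
    grouped = {}
    for pair, w in edges.items():
        a, b = tuple(pair)
        a = super_id if a in (u, v) else a
        b = super_id if b in (u, v) else b
        if a == b:
            continue  # edges internal to the supernode disappear
        grouped.setdefault(frozenset((a, b)), []).append(w)
    return {p: sum(ws) / len(ws) for p, ws in grouped.items()}

edges = {frozenset(("a", "c")): 1.0, frozenset(("b", "c")): 3.0}
compressed = compress_pair(edges, "a", "b", "S")
# the superedge {S, c} replaces both originals; each original weight is
# now off by the same amount, the approximation error being minimized
```

The optimization then amounts to choosing which pairs to merge so this per-edge error stays small while the graph shrinks as much as possible.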
Abstract:
Processing maps for hot working of stainless steel of type AISI 304L have been developed on the basis of the flow stress data generated by compression and torsion in the temperature range 600–1200 °C and strain rate range 0.1–100 s−1. The efficiency of power dissipation given by 2m/(m+1), where m is the strain rate sensitivity, is plotted as a function of temperature and strain rate to obtain a processing map, which is interpreted on the basis of the Dynamic Materials Model. The maps obtained by compression as well as torsion exhibited a domain of dynamic recrystallization with its peak efficiency occurring at 1200 °C and 0.1 s−1. These are the optimum hot-working parameters which may be obtained by either of the test techniques. The peak efficiency for the dynamic recrystallization is apparently higher in torsion (64%) than that obtained in constant-true-strain-rate compression (41%) and the difference is explained on the basis of strain rate variations occurring across the section of solid torsion bar. A region of flow instability has occurred at lower temperatures (below 1000 °C) and higher strain rates (above 1 s−1) and is wider in torsion than in compression. To achieve complete microstructure control in a component, the state of stress will have to be considered.
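The efficiency measure used in these maps is a one-line formula; a small sketch with an illustrative m value (the example number is mine, not from the paper):

```python
def power_dissipation_efficiency(m):
    """Efficiency of power dissipation, 2m/(m+1), computed from the
    strain rate sensitivity m (Dynamic Materials Model)."""
    return 2.0 * m / (m + 1.0)

# e.g. a strain rate sensitivity of m = 0.3 gives an efficiency of
# about 0.46, i.e. roughly 46%
eta = power_dissipation_efficiency(0.3)
```

Plotting this efficiency over the temperature and strain-rate grid yields the processing map; domains of high efficiency mark favourable mechanisms such as dynamic recrystallization.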
Abstract:
Large external memory bandwidth requirement leads to increased system power dissipation and cost in video coding applications. The majority of external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing power and bandwidth requirement. The low cost transformless compression technique uses a lossy reference for motion estimation to reduce memory traffic, and a lossless reference for motion compensation (MC) to avoid drift. Thus, it is compatible with all existing video standards. We calculate the quantization error bound and show that by storing the quantization error separately, bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to the video encoding application. 24-39% reduction in peak bandwidth and 23-31% reduction in total average power consumption are observed for IBBP sequences.
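The drift-avoidance idea, storing the quantization error separately so the lossy reference can be restored exactly for motion compensation, can be sketched as a scalar toy example (my own illustration, not the paper's transformless scheme):

```python
def quantize_with_error(sample, step):
    """Lossy value plus the bounded residual needed to restore it exactly."""
    q = round(sample / step)      # kept in the compressed reference frame
    err = sample - q * step       # stored separately; |err| <= step / 2
    return q, err

# Motion estimation reads only q (less memory traffic); motion
# compensation adds the stored error back to recover the exact pixel,
# so no drift accumulates across frames.
q, err = quantize_with_error(137, 8)
restored = q * 8 + err
```

The bounded error is what makes the overhead predictable: the residual stream never needs more than log2(step) extra bits per sample.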