999 results for compression function
Abstract:
A new cryptographic hash function Whirlwind is presented. We give the full specification and explain the design rationale. We show how the hash function can be implemented efficiently in software and give first performance numbers. A detailed analysis of the security against state-of-the-art cryptanalysis methods is also provided. In comparison to the algorithms submitted to the SHA-3 competition, Whirlwind takes recent developments in cryptanalysis into account by design. Even though software performance is not outstanding, it compares favourably with the 512-bit versions of SHA-3 candidates such as LANE or the original CubeHash proposal and is about on par with ECHO and MD6.
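As a generic illustration of the construction this abstract builds on, a hash function obtained by iterating a compression function over padded message blocks (the classic Merkle-Damgård pattern), the following Python sketch may help. It is not Whirlwind's specification; the block size, state size and stand-in compression function are illustrative placeholders.

import hashlib

BLOCK_SIZE = 64   # bytes per message block (illustrative choice)
STATE_SIZE = 64   # bytes of chaining state (illustrative choice)

def compression_function(state: bytes, block: bytes) -> bytes:
    # Placeholder compression function: mixes the chaining state with one
    # message block to produce a new state. A real design such as Whirlwind
    # defines this mapping precisely; SHA-512 is used here only as a stand-in.
    return hashlib.sha512(state + block).digest()

def iterated_hash(message: bytes) -> bytes:
    # Merkle-Damgard strengthening: append 0x80, zero padding and the message
    # length so the input splits into whole blocks.
    length = len(message).to_bytes(16, "big")
    padded = message + b"\x80"
    padded += b"\x00" * (-(len(padded) + 16) % BLOCK_SIZE) + length
    state = b"\x00" * STATE_SIZE                  # fixed initial value
    for i in range(0, len(padded), BLOCK_SIZE):
        state = compression_function(state, padded[i:i + BLOCK_SIZE])
    return state                                  # final chaining value = digest

# Example: print(iterated_hash(b"abc").hex())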
Abstract:
AMS Subj. Classification: Primary 20N05, Secondary 94A60
Abstract:
Recurrent iterated function systems (RIFSs) are improvements of iterated function systems (IFSs) that use elements of the theory of Markovian stochastic processes and can produce more natural-looking images. We construct new RIFSs consisting essentially of a vertical contraction factor function and nonlinear transformations. These RIFSs are applied to image compression.
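As background, the sketch below decodes a plain (non-recurrent) IFS by the random-iteration ("chaos game") algorithm; the three affine maps, the uniform map selection and the point count are illustrative assumptions and do not reproduce the paper's recurrent construction with a vertical contraction factor function and nonlinear transformations.

import random

# Three contractive affine maps whose attractor is the Sierpinski triangle.
MAPS = [
    lambda x, y: (0.5 * x,        0.5 * y),         # contract toward (0, 0)
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),         # contract toward (1, 0)
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),   # contract toward (0.5, 1)
]

def render_attractor(n_points=50000):
    # Random-iteration algorithm: repeatedly apply a randomly chosen map and
    # record the visited points, which converge onto the attractor.
    x, y = 0.0, 0.0
    points = []
    for i in range(n_points):
        x, y = random.choice(MAPS)(x, y)
        if i > 20:                                  # skip the initial transient
            points.append((x, y))
    return points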
Abstract:
We present a review of perceptual image quality metrics and their application to still image compression. The review describes how image quality metrics can be used to guide an image compression scheme and outlines the advantages, disadvantages and limitations of a number of quality metrics. We examine a broad range of metrics ranging from simple mathematical measures to those which incorporate full perceptual models. We highlight some variation in the models for luminance adaptation and the contrast sensitivity function and discuss what appears to be a lack of a general consensus regarding the models which best describe contrast masking and error summation. We identify how the various perceptual components have been incorporated in quality metrics, and identify a number of psychophysical testing techniques that can be used to validate the metrics. We conclude by illustrating some of the issues discussed throughout the paper with a simple demonstration. (C) 1998 Elsevier Science B.V. All rights reserved.
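For context, the simplest of the "simple mathematical measures" contrasted here with perceptual models is the peak signal-to-noise ratio; a minimal sketch (assuming 8-bit images, hence a peak value of 255) is:

import numpy as np

def psnr(original: np.ndarray, compressed: np.ndarray, peak: float = 255.0) -> float:
    # Peak signal-to-noise ratio: a purely mathematical quality measure with
    # no model of luminance adaptation, contrast sensitivity or masking.
    mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                         # identical images
    return 10.0 * np.log10(peak ** 2 / mse)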
Abstract:
Our aim was to analyze the influence of subtle cochlear damage on temporal auditory resolution in tinnitus patients. Forty-eight subjects (hearing threshold <= 25 dB HL) were assigned to one of two experimental groups: 28 without auditory complaints (mean age, 28.8 years) and 20 with tinnitus (mean age, 33.5 years). We analyzed distortion product otoacoustic emission growth functions (by threshold, slope, and estimated amplitude), extended high-frequency thresholds, and the Gaps-in-Noise test. There were differences between the groups, principally in the extended high-frequency thresholds and the Gaps-in-Noise test results. Our findings suggest that subtle peripheral hearing impairment affects temporal resolution in tinnitus, even when pure-tone thresholds as conventionally measured appear normal. Copyright (C) 2010 S. Karger AG, Basel
Abstract:
PURPOSE. To evaluate the relationship between pattern electroretinogram (PERG) amplitude, macular and retinal nerve fiber layer (RNFL) thickness measured by optical coherence tomography (OCT), and visual field (VF) loss on standard automated perimetry (SAP) in eyes with temporal hemianopia from chiasmal compression. METHODS. Forty-one eyes from 41 patients with permanent temporal VF defects from chiasmal compression and 41 healthy subjects underwent transient full-field and hemifield (temporal or nasal) stimulation PERG, SAP, and time-domain OCT macular and RNFL thickness measurements. Comparisons were made using Student's t-test. Deviation from normal VF sensitivity for the central 18° of the VF was expressed in 1/Lambert units. Correlations between measurements were verified by linear regression analysis. RESULTS. PERG and OCT measurements were significantly lower in eyes with temporal hemianopia than in normal eyes. A significant correlation was found between VF sensitivity loss and full-field or nasal, but not temporal, hemifield PERG amplitude. Likewise, a significant correlation was found between VF sensitivity loss and most OCT parameters. No significant correlation was observed between OCT and PERG parameters, except for nasal hemifield amplitude. A significant correlation was observed between several macular and RNFL thickness parameters. CONCLUSIONS. In patients with chiasmal compression, PERG amplitude and OCT thickness measurements were significantly related to VF loss, but not to each other. OCT and PERG quantify neuronal loss differently, but both technologies are useful in understanding the structure-function relationship in patients with chiasmal compression. (ClinicalTrials.gov number, NCT00553761.) (Invest Ophthalmol Vis Sci. 2009;50:3535-3541) DOI:10.1167/iovs.08-3093
Abstract:
Objective. Evaluation of endothelial impairment by sonographic measurement of flow-mediated dilatation (FMD) has become widely used. However, this method has 2 main caveats: the dilatation depends on the baseline arterial diameter, and a high level of precision is required. Vasodilatation leads to an amplified fall in impedance. We hypothesized that assessing the pulsatility index change (PI-C) 1 minute after 5-minute forearm compression might capture that fall in impedance. The aim of this study was to compare the PI-C with FMD. Methods. Flow-mediated dilatation and the PI-C were assessed in 51 healthy women aged between 35.1 and 67.1 years. We correlated both FMD and the PI-C with age, body mass index, waist circumference, cholesterol level, high-density lipoprotein level, glucose level, systolic and diastolic blood pressure, pulse pressure, brachial artery diameter, simplified Framingham score, intima-media thickness, and carotid stiffness index. Intraclass correlation coefficients between 2 FMD and PI-C measurements were also examined. Results. Only FMD correlated with baseline brachial diameter (r = -0.53). The PI-C had a high correlation with age, body mass index, waist circumference, cholesterol level, systolic blood pressure, pulse pressure, simplified Framingham score, and intima-media thickness. The correlation between FMD and the PI-C was high (r = -0.66). The PI-C had a higher intraclass correlation coefficient (0.991) than FMD (0.836) but not than brachial artery diameter (0.989). Conclusions. The PI-C had a large correlation with various markers of cardiovascular risk. Additionally, PI-C measurement does not require offline analysis, extra software, or electrocardiography. We think that the PI-C could be considered a marker of endothelial function; however, more studies are required before further conclusions can be drawn.
Abstract:
It has been argued that power-law time-to-failure fits for cumulative Benioff strain and an evolution in size-frequency statistics in the lead-up to large earthquakes are evidence that the crust behaves as a Critical Point (CP) system. If so, intermediate-term earthquake prediction is possible. However, this hypothesis has not been proven. If the crust does behave as a CP system, stress correlation lengths should grow in the lead-up to large events through the action of small to moderate ruptures and drop sharply once a large event occurs. However, this evolution in stress correlation lengths cannot be observed directly. Here we show, using the lattice solid model to describe discontinuous elasto-dynamic systems subjected to shear and compression, that it is possible for correlation lengths to exhibit CP-type evolution. In the case of a granular system subjected to shear, this evolution occurs in the lead-up to the largest event and is accompanied by an increasing rate of moderate-sized events and a power-law acceleration of Benioff strain release. In the case of an intact sample system subjected to compression, the evolution occurs only after a mature fracture system has developed. The results support the existence of a physical mechanism for intermediate-term earthquake forecasting and suggest this mechanism is fault-system dependent. This offers an explanation of why accelerating Benioff strain release is not observed prior to all large earthquakes. The results prove the existence of an underlying evolution in discontinuous elasto-dynamic systems which is capable of providing a basis for forecasting catastrophic failure and earthquakes.
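For reference, the power-law time-to-failure fit of cumulative Benioff strain referred to above is commonly written as follows (the exact exponent used in these simulations is not stated in the abstract):

\varepsilon(t) = \sum_{t_i \le t} \sqrt{E_i}, \qquad
\varepsilon(t) \approx A + B\,(t_f - t)^{m}, \qquad 0 < m < 1,

where E_i are the radiated energies of the events up to time t, t_f is the failure time, and A, B and m are fit parameters (m is often reported near 0.3 in the literature).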
Abstract:
Objectives: Existing VADs are single-ventricle pumps that require anticoagulation. We developed a bi-ventricular external assist device that partially reproduces the physiological muscle function of the heart. This artificial muscle could wrap the heart and improve its contractile force. Methods: The device has a carbon fiber skeleton fitting the heart of a 30-40 kg patient, to which a Nitinol-based artificial muscle is connected. The artificial muscle wraps both ventricles. The Nitinol fibers are woven on a Kevlar mesh surrounding each ventricle. The fibers are electrically driven by a dedicated control unit developed for this purpose. We assessed the hemodynamic performance of this device using a previously described dedicated bench test. Ejected volume and pressure gradient were measured with afterloads ranging from 10 to 50 mmHg. Results: With an afterload of 50 mmHg the system has an ejection fraction of 4% on the right side and 5% on the left side. The system is able to generate a systolic ejection of 2.2 mL on the right side and 3.25 mL on the left side. With an afterload of 25 mmHg the results are reduced by about 20%. The activation frequency can reach 80/minute, resulting in a total volume displacement of 176 mL/minute on the right side and 260 mL/minute on the left side. Conclusions: These preliminary studies confirmed the possibility of improving the ejection fraction of a failing heart using an artificial muscle for external cardiac compression, avoiding anticoagulation therapy. This device could be helpful in weaning from cardiopulmonary bypass and/or for short-term cardio-circulatory support in the pediatric population with cardiac failure.
Abstract:
Estimation of soil load-bearing capacity from mathematical models that relate preconsolidation pressure (σp) to mechanical resistance to penetration (PR) and gravimetric soil water content (U) is important for defining strategies to prevent compaction of agricultural soils. Our objective was therefore to model σp and the compression index (CI) as functions of PR (measured with an impact penetrometer in the field and with a static penetrometer inserted at a constant rate in the laboratory) and U in a Rhodic Eutrudox. The experiment consisted of six treatments: no-tillage system (NT); NT with chiseling; and NT with additional compaction by combine traffic (4, 8, 10, and 20 passes). Soil bulk density, total porosity, PR (field and laboratory measurements), U, σp, and CI were determined in the 5.5-10.5 cm and 13.5-18.5 cm layers. σp and CI were modeled as functions of PR at different U. σp increased and CI decreased linearly with increasing PR. The correlations between σp and PR and between PR and CI are influenced by U. From these correlations, soil load-bearing capacity and compaction susceptibility can be estimated from PR readings taken at different U.
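A minimal sketch of the kind of linear fit described above (σp increasing linearly with PR at a given U) is shown below; the numbers are hypothetical placeholders, not the study's data, and the actual models were fitted separately for different water contents.

import numpy as np

def fit_load_bearing_model(pr: np.ndarray, sigma_p: np.ndarray):
    # Least-squares fit of sigma_p = a + b * PR for one water-content class.
    b, a = np.polyfit(pr, sigma_p, 1)               # slope b, intercept a
    return a, b

pr = np.array([1.0, 2.0, 3.0, 4.0])                 # penetration resistance, MPa (hypothetical)
sigma_p = np.array([80.0, 110.0, 140.0, 170.0])     # preconsolidation pressure, kPa (hypothetical)
a, b = fit_load_bearing_model(pr, sigma_p)
print(f"sigma_p ~ {a:.1f} + {b:.1f} * PR")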
Abstract:
This thesis deals with distance transforms, a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application they are applied to gray-level image compression. Both are extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been adapted to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images.

Both the DTOCS and the EDTOCS require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance functions (GRAYMAT, etc.) find the minimum path joining two points by the smallest sum of gray levels or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map, in which the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The EDTOCS differs from the DTOCS in that it calculates these gray-level differences in a different way, propagating local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented.

Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are presented. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
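The following Python sketch illustrates a two-pass, chamfer-style transform of the kind described above. It assumes the local step cost between 8-neighbours is the gray-level difference plus one chessboard step, which matches the description given here but is not necessarily the thesis' exact DTOCS definition; the seed convention and the number of iteration rounds are likewise illustrative.

import numpy as np

def dtocs_like(gray: np.ndarray, seeds: np.ndarray, n_rounds: int = 2) -> np.ndarray:
    # gray:  2-D gray-level image.
    # seeds: boolean mask of pixels whose distance is defined to be zero.
    h, w = gray.shape
    g = gray.astype(np.float64)
    dist = np.where(seeds, 0.0, np.inf)
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]     # neighbours already visited in a forward raster scan
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]         # neighbours already visited in a backward raster scan
    for _ in range(n_rounds):                       # extra rounds for complicated images
        for offsets, rows, cols in ((fwd, range(h), range(w)),
                                    (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))):
            for y in rows:
                for x in cols:
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            step = abs(g[y, x] - g[ny, nx]) + 1.0
                            if dist[ny, nx] + step < dist[y, x]:
                                dist[y, x] = dist[ny, nx] + step
    return dist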
Abstract:
The role of airway inflammation in ventilated preterm newborns and the risk factors associated with the development of chronic lung disease are not well understood. Our objective was to analyze the association of the airway inflammatory response in ventilated preterm infants, assessed by serial measurements of TNF-α and IL-10 in tracheobronchial lavage (TBL), with perinatal factors and lung function measured early in life. A series of TBL samples was collected from ventilated preterm infants (less than 32 weeks of gestational age) and concentrations of TNF-α and IL-10 were measured by ELISA. Pulmonary function tests were performed after discharge by the raised volume rapid compression technique. Twenty-five subjects were recruited and 70 TBL samples were obtained. There was a significant positive association between TNF-α and IL-10 levels and the length of time between rupture of the amniotic membranes and delivery (r = 0.65, P = 0.002, and r = 0.57, P < 0.001, respectively). Lung function was measured between 1 and 22 weeks of corrected age in 10 patients. Multivariable analysis with adjustment for differences in lung volume showed a significant negative association between TNF-α levels and forced expiratory flow (FEF50; r = -0.6; P = 0.04), FEF75 (r = -0.76; P = 0.02), FEF85 (r = -0.75; P = 0.03), FEF25-75 (r = -0.71; P = 0.02), and FEV0.5 (r = -0.39; P = 0.03). These data suggest that TNF-α levels in the airways during the first days of life were associated with subsequent lung function abnormalities measured weeks or months later.
Abstract:
This paper describes JERIM-320, a new 320-bit hash function for ensuring message integrity, and details a comparison with popular hash functions of similar design. JERIM-320 and FORK-256 operate on four parallel lines of message processing, while RIPEMD-320 operates on two parallel lines. Popular hash functions like MD5 and SHA-1 use serial successive iteration to design their compression functions and hence are less secure. The parallel branches help JERIM-320 achieve a higher level of security through multiple iterations and processing of the message blocks. The focus of this work is to demonstrate the ability of JERIM-320 to ensure the integrity of messages to a higher degree, suiting fast-growing Internet applications.
Abstract:
This work proposes a parallel genetic algorithm for compressing scanned document images. A fitness function is designed with the Hausdorff distance, which determines the terminating condition. The algorithm helps to locate the text lines. A greater compression ratio is achieved with less distortion.
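The abstract does not give the exact fitness definition; a minimal sketch of a Hausdorff-distance-based fitness over binary document images (assuming the symmetric Hausdorff distance on foreground pixel coordinates) is:

import numpy as np

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Symmetric Hausdorff distance between the foreground pixel sets of two
    # binary images (brute force; adequate for small images).
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

def fitness(original: np.ndarray, reconstructed: np.ndarray) -> float:
    # Hypothetical GA fitness: lower distortion (smaller Hausdorff distance)
    # means a fitter candidate, so negate the distance.
    return -hausdorff_distance(original, reconstructed)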
Abstract:
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents a yes-no Bloom filter, which is a data structure consisting of two parts: the yes-filter, a standard Bloom filter, and the no-filter, another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound solver, and it is what we recommend for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
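A minimal sketch of the query logic described above follows; the filter sizes, the hashing scheme and the class names are illustrative choices, and the ILP/ADP selection of which false positives to place in the no-filter is not shown.

import hashlib

class BloomFilter:
    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = bytearray(m)                    # one byte per bit, for clarity

    def _positions(self, item: str):
        for i in range(self.k):                     # k hash positions per item
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

class YesNoBloomFilter:
    # Accept an item only if the yes-filter recognises it and the no-filter
    # (populated with known false positives of the yes-filter) does not.
    def __init__(self, yes_filter: BloomFilter, no_filter: BloomFilter):
        self.yes, self.no = yes_filter, no_filter

    def __contains__(self, item: str) -> bool:
        return item in self.yes and item not in self.no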