12 results for Data compression

in Deakin Research Online - Australia


Relevance:

70.00%

Publisher:

Abstract:

Traditional data compression algorithms for 2D images work within the information-theoretic paradigm, attempting to remove as much redundant information as possible. However, through the use of a depletion algorithm that takes advantage of characteristics of the human visual system, images can be displayed using only half or a quarter of the original information with no appreciable loss of quality.

The characteristic of the human visual system that allows the viewer to perceive more information than is actually displayed is known as the beta, or picket fence, effect. It is called the picket fence effect because it is noticeable when a person travels along a picket fence. Although the person never has an unimpeded view of the objects behind the fence at any single instant, those objects are clearly visible as long as the person keeps moving; in most cases the fence is hardly noticed at all.

The technique we have developed uses this effect to achieve higher levels of compression than would otherwise be possible. Because a fundamental requirement of the beta effect is movement of the fence relative to the object, the effect can only be exploited in image sequences, where relative movement between the depletion pattern and objects within the image can be achieved.

As MPEG is the recognised standard for coding image sequences, compatibility with MPEG is essential. We have modified our technique so that it operates in conjunction with MPEG, providing further compression beyond MPEG alone.
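
The abstract does not specify the depletion pattern, so the following Python sketch is purely illustrative: it alternates complementary checkerboard masks across consecutive frames so that, over any two frames, every pixel position is displayed once and temporal integration by the viewer fills in the gaps. The mask choice and frame handling are assumptions, not the authors' algorithm or its MPEG integration.

```python
import numpy as np

def checkerboard_mask(shape, phase):
    """Binary mask that keeps half of the pixels; phase 0 and 1 give complementary halves."""
    ys, xs = np.indices(shape)
    return ((ys + xs + phase) % 2).astype(bool)

def deplete_sequence(frames):
    """Display only half of the pixels in each frame, alternating which half is kept,
    so that over any two consecutive frames every pixel position is shown once."""
    for t, frame in enumerate(frames):
        mask = checkerboard_mask(frame.shape, phase=t % 2)
        yield np.where(mask, frame, 0)   # dropped pixels rendered black

# Toy usage on a short synthetic grayscale sequence
rng = np.random.default_rng(0)
sequence = [rng.integers(0, 256, size=(8, 8)) for _ in range(4)]
for depleted in deplete_sequence(sequence):
    print(np.count_nonzero(depleted), "of", depleted.size, "pixel values kept")
```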

Relevance:

70.00%

Publisher:

Abstract:

A hybrid neural network model, based on the fusion of fuzzy adaptive resonance theory (fuzzy ART, or FA) and the general regression neural network (GRNN), is proposed in this paper. Both FA and the GRNN are incremental learning systems and are very fast to train. The proposed hybrid model, denoted GRNNFA, retains these advantages while reducing the computational requirements of calculating and storing kernel information. A clustering version of the GRNN is designed, with data compression by FA used for noise removal. An adaptive gradient-based kernel width optimization algorithm has also been devised, and convergence of the gradient descent algorithm can be accelerated by geometric incremental growth of the updating factor. A series of experiments on four benchmark datasets has been conducted to assess the effectiveness of GRNNFA and compare it with other approaches. The GRNNFA model is also employed in a novel application: predicting the evacuation time of patrons at typical karaoke centers in Hong Kong in the event of fire. The results demonstrate the applicability of GRNNFA to noisy data regression problems.
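
The GRNNFA model itself is not reproduced here; the sketch below only illustrates the core idea of a GRNN whose kernels sit on cluster prototypes rather than on every training sample. Plain k-means stands in for fuzzy ART, and a fixed kernel width stands in for the gradient-based width optimization, so both are assumptions.

```python
import numpy as np

def cluster_prototypes(X, y, k, iters=20, seed=0):
    """Stand-in for fuzzy-ART compression: k-means prototypes with per-cluster mean targets."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    targets = np.array([y[labels == j].mean() if np.any(labels == j) else 0.0
                        for j in range(k)])
    return centers, targets

def grnn_predict(x, centers, targets, sigma=0.5):
    """GRNN output: Gaussian-kernel-weighted average of the prototype targets."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ targets) / (w.sum() + 1e-12)

# Toy usage: regress y = sin(x) from noisy samples
X = np.linspace(0, 2 * np.pi, 200)[:, None]
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(1).normal(size=200)
C, T = cluster_prototypes(X, y, k=20)
print(grnn_predict(np.array([np.pi / 2]), C, T))   # expect a value near 1.0
```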

Relevance:

70.00%

Publisher:

Abstract:

This paper proposes a novel hierarchical data fusion technique for the non-destructive testing (NDT) and condition assessment of timber utility poles. The new method analyzes stress wave data from multisensor, multiexcitation guided wave testing using a hierarchical data fusion model consisting of feature extraction, data compression, pattern recognition, and decision fusion algorithms. The proposed technique is validated using guided wave tests of a sample of in situ timber poles; the actual health states of these poles are known from autopsies conducted after the testing, providing ground truth for supervised classification. In the data fusion level, the main features are extracted from the sampled stress wave signals using power spectral density (PSD) estimation, the wavelet packet transform (WPT), and empirical mode decomposition (EMD). These features are then compiled into a feature vector via real-number encoding and passed to the next level for further processing. Principal component analysis (PCA) is adopted for feature compression and to minimize information redundancy and noise interference. In the feature fusion level, two classifiers based on the support vector machine (SVM) are applied to the sensor-separated data of the two excitation types and the pole condition is identified. In the decision-making fusion level, Dempster–Shafer (D-S) evidence theory is employed to integrate the results from the individual sensors into a final decision. The results of the in situ timber pole testing show that the proposed hierarchical data fusion model is able to distinguish between healthy and faulty poles, demonstrating the effectiveness of the new method.
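
As a rough illustration of the last three stages of this pipeline (PCA feature compression, per-sensor SVM classification, and Dempster–Shafer combination), here is a hedged Python sketch. The stress-wave feature extraction (PSD, WPT, EMD) is replaced by synthetic features, and the Dempster rule is simplified to two singleton hypotheses with no mass on ignorance; none of this is the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def dempster_combine(m1, m2):
    """Dempster's rule for two singleton hypotheses (healthy, faulty) with no mass
    on ignorance, in which case it reduces to a normalised product."""
    joint = m1 * m2
    return joint / (joint.sum() + 1e-12)

# Synthetic stand-in for features extracted from two sensors' stress-wave signals
rng = np.random.default_rng(0)
n_train, n_test = 150, 50
labels = rng.integers(0, 2, n_train + n_test)          # 0 = healthy, 1 = faulty
sensors = [labels[:, None] * 1.5 + rng.normal(size=(n_train + n_test, 30))
           for _ in range(2)]

soft_outputs = []
for F in sensors:                                       # one SVM branch per sensor
    pca = PCA(n_components=5).fit(F[:n_train])          # feature compression
    clf = SVC(probability=True).fit(pca.transform(F[:n_train]), labels[:n_train])
    soft_outputs.append(clf.predict_proba(pca.transform(F[n_train:])))

fused = np.array([dempster_combine(a, b) for a, b in zip(*soft_outputs)])
print("fused accuracy:", (fused.argmax(axis=1) == labels[n_train:]).mean())
```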

Relevance:

60.00%

Publisher:

Abstract:

Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modelling. We use improvement in access time as the performance metric, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). The performance maximization problem is expressed as a stretch knapsack problem. We develop an algorithm to maximize the improvement in access time by solving the stretch knapsack problem, using theoretically proven apparatus to reduce the search space. Integration between speculative prefetching and caching is also investigated, albeit under the assumption of equal item sizes.
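
The paper's stretch knapsack formulation is not reproduced here; the sketch below substitutes a plain 0/1 knapsack to illustrate the underlying trade-off of expected access-time saving against prefetch cost under a time budget. The item values, the dynamic program, and the toy numbers are all assumptions.

```python
def plan_prefetch(items, budget):
    """Choose which candidate items to prefetch within a time budget so as to
    maximise the expected reduction in access time.

    items: list of (prob_next_access, fetch_time, access_time_saved_if_prefetched)
    budget: prefetching time available before the next request (integer units)

    Plain 0/1 knapsack stand-in for the stretch knapsack:
    value_i = p_i * t_i (expected time saved), weight_i = fetch_time_i.
    """
    n = len(items)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i, (p, w, t) in enumerate(items, start=1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if w <= b:
                best[i][b] = max(best[i][b], best[i - 1][b - w] + p * t)
    chosen, b = [], budget                 # recover the chosen set
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= items[i - 1][1]
    return best[n][budget], sorted(chosen)

# Three candidate documents: (probability of next access, fetch time, access time saved)
print(plan_prefetch([(0.5, 3, 10), (0.3, 2, 8), (0.2, 4, 12)], budget=5))
```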

Relevance:

60.00%

Publisher:

Abstract:

In this paper, a two-stage algorithm for vector quantization based on a self-organizing map (SOM) neural network is proposed. First, a conventional self-organizing map is modified to deal with dead codewords in the learning process and is used to obtain the codebook distribution structure for a given set of input data. Next, sub-blocks are classified according to this distribution structure using a priori criteria. The conventional LBG algorithm is then applied to these sub-blocks for data classification, with initial values obtained via the SOM. Finally, extensive simulations illustrate that the proposed two-stage algorithm is very effective.
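
A minimal sketch of the two-stage idea follows: a simplified one-dimensional SOM supplies the initial codebook structure and the conventional LBG algorithm then refines it. The paper's specific modification for dead codewords and its sub-block classification step are not reproduced, and the random blocks are placeholder data.

```python
import numpy as np

def train_som(data, n_nodes=16, epochs=5, lr0=0.5, sigma0=3.0, seed=0):
    """Simplified 1-D SOM: returns prototype vectors capturing the codebook
    distribution structure of the input blocks."""
    rng = np.random.default_rng(seed)
    W = data[rng.choice(len(data), n_nodes, replace=False)].astype(float)
    idx = np.arange(n_nodes)
    steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            lr = lr0 * (1 - t / steps)
            sigma = max(sigma0 * (1 - t / steps), 0.5)
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best-matching unit
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2)) # neighbourhood function
            W += lr * h[:, None] * (x - W)
            t += 1
    return W

def lbg_refine(data, codebook, iters=10):
    """Conventional LBG (generalised Lloyd) refinement of an initial codebook."""
    C = codebook.copy()
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(len(C)):
            members = data[labels == j]
            if len(members):                  # skip empty cells to avoid dead codewords
                C[j] = members.mean(axis=0)
    return C

# Toy usage: quantise random 4x4 image blocks flattened to 16-D vectors
rng = np.random.default_rng(1)
blocks = rng.normal(size=(500, 16))
codebook = lbg_refine(blocks, train_som(blocks))
print("codebook shape:", codebook.shape)
```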

Relevance:

30.00%

Publisher:

Abstract:

Z-pinning is a recently developed technique for enhancing the through-thickness strength of composite laminates. Recent experimental and theoretical studies have shown that z-pins significantly improve mode I and mode II fracture toughness. In practice, buckling accompanied by delamination is a typical failure mode in laminated composite structures. To more fully understand how z-pinning improves the overall mechanical properties of laminated composites, a numerical model is developed in this paper to investigate the influence of z-pins on the buckling of composite laminates with initial delaminations under edge-wise compression. The numerical results indicate that z-pinning can effectively increase the compressive strength of composite laminates provided that the initial imperfection is within a certain range. The magnitude of the improvement is consistent with available experimental data.

Relevance:

30.00%

Publisher:

Abstract:

Objective To determine whether vertebroplasty is more effective than placebo for patients with pain of recent onset (≤6 weeks) or severe pain (score ≥8 on 0-10 numerical rating scale).

Design Meta-analysis of combined individual patient level data.

Setting Two multicentred randomised controlled trials of vertebroplasty; one based in Australia, the other in the United States.

Participants 209 participants (Australian trial n=78, US trial n=131) with at least one radiographically confirmed vertebral compression fracture. 57 (27%) participants had pain of recent onset (vertebroplasty n=25, placebo n=32) and 99 (47%) had severe pain at baseline (vertebroplasty n=50, placebo n=49).

Intervention Percutaneous vertebroplasty versus a placebo procedure.

Main outcome measure Scores for pain (0-10 scale) and function (modified, 23 item Roland-Morris disability questionnaire) at one month.

Results For participants with pain of recent onset, between group differences in mean change scores at one month for pain and disability were 0.1 (95% confidence interval −1.4 to 1.6) and 0.2 (−3.0 to 3.4), respectively. For participants with severe pain at baseline, between group differences for pain and disability scores at one month were 0.3 (−0.8 to 1.5) and 1.4 (−1.2 to 3.9), respectively. At one month those in the vertebroplasty group were more likely to be using opioids.

Conclusions Individual patient data meta-analysis from two blinded trials of vertebroplasty, powered for subgroup analyses, failed to show an advantage of vertebroplasty over placebo for participants with recent onset fracture or severe pain. These results do not support the hypothesis that selected subgroups would benefit from vertebroplasty.

Relevance:

30.00%

Publisher:

Abstract:

Compressed sensing (CS) is a new information sampling theory for acquiring sparse or compressible data with far fewer measurements than required by its Nyquist/Shannon counterpart. This is particularly important for imaging applications such as magnetic resonance imaging or astronomy. However, in the existing CS formulation, the use of the ℓ2 norm on the residuals is not particularly efficient when the noise is impulsive, which can increase the upper bound on the recovery error. To address this problem, we consider a robust formulation for CS that suppresses outliers in the residuals. We propose an iterative algorithm for solving the robust CS problem that exploits the power of existing CS solvers. We also show that the upper bound on the recovery error in the case of non-Gaussian noise is reduced, and we demonstrate the efficacy of the method through numerical studies.
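
The authors' iterative algorithm wraps existing CS solvers and is not reproduced here. The sketch below only illustrates the robust-residual idea with a self-contained iteratively reweighted ISTA loop in which a Huber-type data term down-weights impulsive outliers; the solver, parameters, and toy data are all assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def robust_ista(A, y, lam=0.05, delta=0.5, iters=300):
    """Approximately solves min_x sum_i huber(y_i - (Ax)_i) + lam * ||x||_1
    via iteratively reweighted ISTA steps, so large (impulsive) residuals
    contribute less than in a plain least-squares data term."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the smooth part
    for _ in range(iters):
        r = y - A @ x
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))   # Huber weights
        grad = -A.T @ (w * r)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage: sparse signal, Gaussian measurements, a few gross outliers in y
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)
y[rng.choice(m, 3, replace=False)] += 10.0           # impulsive noise
x_hat = robust_ista(A, y)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```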

Relevance:

30.00%

Publisher:

Abstract:

Recently, fields with substantial computing requirements have turned to cloud computing for economical, scalable, and on-demand provisioning of required execution environments. However, current cloud offerings focus on providing individual servers while tasks such as application distribution and data preparation are left to cloud users. This article presents a new form of cloud called the HPC Hybrid Deakin (H2D) cloud: an experimental hybrid cloud capable of utilising both local and remote computational services for large embarrassingly parallel applications. As well as supporting execution, H2D also provides a new service, called DataVault, that provides transparent data management services so all cloud-hosted clusters have the required datasets before commencing execution.
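
The H2D and DataVault interfaces are not described in enough detail here to reproduce, so the following is only a hypothetical Python sketch of the staging idea: make sure every dataset a job requires is present on the cluster before the job starts. All names and paths are illustrative, not the H2D API.

```python
import shutil
from pathlib import Path

def stage_datasets(required, local_dir, vault_dir):
    """Hypothetical sketch of the DataVault idea: ensure every dataset a job needs
    is present on the cluster before the job is allowed to start. All paths and
    names here are illustrative assumptions."""
    local = Path(local_dir)
    local.mkdir(parents=True, exist_ok=True)
    for name in required:
        target = local / name
        if not target.exists():                        # fetch only what is missing
            shutil.copy(Path(vault_dir) / name, target)
    return all((local / name).exists() for name in required)

# A job launcher would call stage_datasets(...) and submit the job only if it returns True.
```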

Relevance:

30.00%

Publisher:

Abstract:

Just-Noticeable-Differences (JND) as a dead-band in perceptual analysis has been widely used for more than a decade, and several researchers have employed the technique for data reduction in haptic data transmission systems. In practice, two different JND coefficients are used, JNDV and JNDF, for velocity and force data respectively; for position data, researchers usually rely on the resolution of the haptic display device to omit data that are imperceptible to humans. In this paper, the pruning of undesirable position data produced by vibration of the device or the subject and/or by noise in the transmission line is addressed. It is shown that using an inverse JNDV for position data can prune such undesirable data. The results of the proposed method are compared with several well-known filters and with methods proposed by other researchers. It is shown that the inverse-JNDV combination provides lower error with good curve smoothness and little computational effort or complexity. It is also shown that this method reduces considerably more data than the forward JNDV.
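
For readers unfamiliar with the dead-band idea, here is a minimal Python sketch of classic JND-based data reduction, where a sample is transmitted only if it deviates from the last transmitted value by more than the JND fraction. The paper's inverse-JNDV pruning of position data is a refinement of this and is not reproduced; the signal and threshold below are assumptions.

```python
import math
import random

def jnd_deadband(samples, jnd=0.1):
    """Classic JND dead-band reduction: transmit a sample only when it deviates from
    the last transmitted value by more than a fixed fraction (the JND) of that value."""
    transmitted, last = [], None
    for t, v in enumerate(samples):
        if last is None or abs(v - last) > jnd * abs(last):
            transmitted.append((t, v))
            last = v
    return transmitted

# Toy usage: a slowly varying signal with small jitter standing in for haptic data
random.seed(0)
signal = [math.sin(0.05 * t) + 0.005 * random.random() for t in range(200)]
kept = jnd_deadband(signal)
print(f"transmitted {len(kept)} of {len(signal)} samples")
```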

Relevance:

30.00%

Publisher:

Abstract:

With advances in semiconductor processes and EDA tools, IC designers can integrate ever more functions on a single chip. However, to meet time-to-market demands and tackle the increasing complexity of SoC designs, fast prototyping and testing are needed. Taking advantage of deep submicron technology, modern FPGAs provide fast, low-cost prototyping with large logic resources and high performance, so the hardware is mapped onto an FPGA-based emulation platform that mimics the behaviour of the SoC. In this paper we use an FPGA as a system on chip for image compression by 2-D DCT and propose an SoC for image compression using the soft-core MicroBlaze processor. The JPEG standard defines compression techniques for image data; as a consequence, it allows image data to be stored and transferred with considerably reduced demands on storage space and bandwidth. Of the four processes provided in the JPEG standard, only one, the baseline process, is widely used. The proposed SoC for JPEG compression has been implemented on an FPGA Spartan-6 SP605 evaluation board using Xilinx Platform Studio, because field programmable gate arrays have a reconfigurable hardware architecture. Hence a JPEG image of reduced size can be produced at high speed, low risk, and a low power consumption of about 0.699 W. The proposed SoC for image compression is evaluated at 83.33 MHz on the Xilinx Spartan-6 FPGA.
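
As a software model of the 2-D DCT core of the baseline JPEG process, here is a brief Python sketch that transforms and uniformly quantises an 8x8 block; per-frequency quantisation tables, entropy coding, and everything specific to the FPGA/MicroBlaze implementation are omitted, and the quantisation step size is an assumption.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q=16):
    """2-D DCT of an 8x8 block followed by uniform quantisation. A software model of
    the baseline-JPEG datapath only; the real design also uses per-frequency
    quantisation tables and entropy coding, omitted here."""
    coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
    return np.round(coeffs / q).astype(np.int16)

def decompress_block(qcoeffs, q=16):
    """Inverse quantisation and inverse 2-D DCT back to pixel values."""
    return np.clip(idctn(qcoeffs * float(q), norm="ortho") + 128.0, 0, 255)

# Toy usage on a smooth 8x8 gradient block, where most DCT energy is in few coefficients
block = np.add.outer(np.arange(8), np.arange(8)) * 8.0
q = compress_block(block)
print("non-zero quantised coefficients:", np.count_nonzero(q), "of 64")
print("max reconstruction error:", float(np.abs(decompress_block(q) - block).max()))
```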

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present a novel anomaly detection framework for multiple heterogeneous yet correlated time series, such as medical surveillance data. Within this framework, we propose an anomaly detection algorithm based on trend and correlation analysis. Moreover, to efficiently process huge amounts of observed time series, a new clustering-based compression method is proposed. Experimental results indicate that our framework is more effective and efficient than its peers. © 2012 Springer-Verlag Berlin Heidelberg.
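
The paper's clustering-based compression and trend/correlation detectors are not specified here, so the sketch below illustrates the general idea only: correlated series are grouped with plain k-means, each group is represented by its centroid, and a series is flagged when it deviates strongly from its own centroid. The detector, threshold, and synthetic data are all assumptions.

```python
import numpy as np

def detect_anomalies(series, k=3, threshold=5.0):
    """Group correlated series with plain k-means, represent each cluster by its
    centroid (the compression step), and flag series whose distance to their own
    centroid is an outlier. series: array of shape (n_series, n_timepoints)."""
    Z = (series - series.mean(axis=1, keepdims=True)) / (series.std(axis=1, keepdims=True) + 1e-9)
    C = Z[:k].copy()                                        # deterministic initialisation
    for _ in range(20):                                     # plain k-means
        labels = np.argmin(((Z[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = Z[labels == j].mean(axis=0)
    dist = np.linalg.norm(Z - C[labels], axis=1)            # per-series reconstruction error
    med = np.median(dist)
    mad = np.median(np.abs(dist - med)) + 1e-9
    return dist > med + threshold * mad

# Toy usage: 30 correlated series, one of which has an injected trend change
rng = np.random.default_rng(1)
base = np.cumsum(rng.normal(size=200))
data = base + rng.normal(scale=0.3, size=(30, 200))
data[7, 100:] += 8.0
print("flagged series:", np.flatnonzero(detect_anomalies(data)))
```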