914 results for predictive coding
Abstract:
A study of heat pump thermodynamic characteristics has been made in the laboratory on a specially designed and instrumented air-to-water heat pump system. The design, using refrigerant R12, was based on the requirement to produce domestic hot water at a temperature of about 50 °C, and the system was assembled in the laboratory. All the experimental data were fed to a microcomputer and stored on disk automatically, coming from appropriate transducers via amplifiers and 16-channel analogue-to-digital converters. The measurements taken were R12 pressures and temperatures, water and R12 mass flow rates, air speed, fan and compressor input powers, water and air inlet and outlet temperatures, and wet and dry bulb temperatures. The time interval between observations could be varied. The results showed, as expected, that the COP was higher at higher air inlet temperatures and at lower hot water output temperatures. The optimum air speed was found to be the speed at which the fan input power was about 4% of the condenser heat output. It was also found that hot water can be produced at a temperature higher than the R12 condensing temperature corresponding to the condensing pressure. This was achieved by designing the condenser to take advantage of discharge superheat and by further heating the water using heat recovery from the compressor. Of the input power to the compressor, typically about 85% was transferred to the refrigerant (50% by the compression work and 35% by the heating of the refrigerant at the cylinder wall), and the remaining 15% was rejected to the cooling medium. The evaporator effectiveness was found to be about 75% and sensitive to the air speed. Using the data collected, a steady-state computer model was developed. For given input conditions (air inlet temperature, air speed, degree of suction superheat, and water inlet and outlet temperatures), the model is capable of predicting the refrigerant cycle, compressor efficiency, evaporator effectiveness, condenser water flow rate and system COP.
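The energy balance reported above can be made concrete with a short sketch. This is an illustration of the abstract's typical figures only; the power values and the function names are assumptions, not the study's model.

```python
# Illustrative split of compressor input power per the reported fractions:
# ~85% to the refrigerant (50% compression work + 35% cylinder-wall heating)
# and ~15% rejected to the cooling medium. Values are examples, not data.

def compressor_energy_split(input_power_w: float) -> dict:
    """Apportion compressor input power using the abstract's typical figures."""
    return {
        "compression_work_w": 0.50 * input_power_w,
        "cylinder_wall_heating_w": 0.35 * input_power_w,
        "rejected_to_cooling_w": 0.15 * input_power_w,
    }

def heating_cop(condenser_heat_w: float, compressor_power_w: float,
                fan_power_w: float) -> float:
    """Heating COP: useful condenser heat over total electrical input."""
    return condenser_heat_w / (compressor_power_w + fan_power_w)

q_cond = 3000.0              # W, assumed condenser heat output
fan = 0.04 * q_cond          # W, the reported optimum: ~4% of condenser output
print(compressor_energy_split(1000.0))
print(f"COP = {heating_cop(q_cond, 1000.0, fan):.2f}")
```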
Abstract:
This thesis introduces and develops a novel real-time predictive maintenance system that estimates machine system parameters using the motion current signature. Recently, motion current signature analysis has been proposed as an alternative to the use of sensors for monitoring internal faults of a motor. A maintenance system based upon the analysis of the motion current signature avoids the need to implement and maintain expensive motion sensing technology. By developing nonlinear dynamical analysis for the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters, and that this information can be extracted using nonlinear mapping techniques, such as neural networks. Towards this end, a proof-of-concept procedure is performed which substantiates this concept. A simulation model, TuneLearn, is developed to simulate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to ascertain confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to map the micro-dynamical structure accurately, owing to the lack of knowledge regarding higher-order and nonlinear factors, such as backlash and compliance. The failure of the simulation model to capture the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature. This motivated us to perform surrogate data testing for nonlinearity in the motion current signature. The results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. The outcomes of the experiments show that nonlinear noise reduction combined with the linear reverse algorithm offers precise machine system parameter estimation using the motion current signature for the implementation of the real-time predictive maintenance system. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
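As a rough illustration of the nonlinear-mapping concept described above, the sketch below trains a small neural network to recover parameters from synthetic current signatures. Everything here (the signal model, features and network settings) is an assumption for demonstration; it is not the thesis's TuneLearn simulator or the BJEST algorithm.

```python
# Minimal sketch: learn a nonlinear map from (synthetic) motion current
# signatures to machine parameters with a small neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, sig_len = 500, 64
t = np.linspace(0.0, 1.0, sig_len)

# Two stand-in "machine parameters" (e.g. gain- and damping-like quantities).
params = rng.uniform(0.1, 1.0, size=(n_samples, 2))
# Signatures whose shape depends nonlinearly on the parameters, plus noise.
signatures = (params[:, :1] * np.sin(2 * np.pi * 5 * t)
              + np.exp(-10 * params[:, 1:] * t)
              + 0.01 * rng.standard_normal((n_samples, sig_len)))

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(signatures[:400], params[:400])           # train on 400 examples
print("estimated:", model.predict(signatures[400:403]))
print("true:     ", params[400:403])                # held-out ground truth
```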
Abstract:
We present and evaluate a novel idea for scalable lossy colour image coding with Matching Pursuit (MP) performed in a transform domain. The idea is to exploit correlations in RGB colour space between image subbands after wavelet transformation, rather than in the spatial domain. We propose a simple quantisation and coding scheme for the colour MP decomposition, based on Run-Length Encoding (RLE), which can achieve performance comparable to JPEG 2000 even though the latter utilises careful data modelling at the coding stage. Thus, the obtained image representation has the potential to outperform JPEG 2000 with a more sophisticated coding algorithm.
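A generic run-length encoder of the kind the quantise-and-code stage could apply to sparse MP coefficient data (long zero runs between significant atoms) might look like the sketch below. This illustrates plain RLE only, not the paper's actual bit-stream syntax.

```python
# Generic run-length encoding: collapse a sequence into (value, run) pairs.

def rle_encode(symbols):
    """Encode a sequence as (value, run_length) pairs."""
    if not symbols:
        return []
    runs, current, count = [], symbols[0], 1
    for s in symbols[1:]:
        if s == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = s, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    return [value for value, count in runs for _ in range(count)]

quantised = [0, 0, 0, 5, 0, 0, -3, 0, 0, 0, 0, 2]   # toy coefficient stream
encoded = rle_encode(quantised)
assert rle_decode(encoded) == quantised
print(encoded)  # [(0, 3), (5, 1), (0, 2), (-3, 1), (0, 4), (2, 1)]
```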
Abstract:
This thesis presents a study of how edges are detected and encoded by the human visual system. The study begins with theoretical work on the development of a model of edge processing, and includes psychophysical experiments on humans and computer simulations of these experiments using the model. The first chapter reviews the literature on edge processing in biological and machine vision, and introduces the mathematical foundations of this area of research. The second chapter gives a formal presentation of a model of edge perception that detects edges and characterizes their blur, contrast and orientation using Gaussian derivative templates. This model has previously been shown to accurately predict human performance in blur matching tasks with several different types of edge profile. The model provides veridical estimates of the blur and contrast of edges that have a Gaussian integral profile. Since blur and contrast are independent parameters of Gaussian edges, the model predicts that varying one parameter should not affect perception of the other. Psychophysical experiments showed that this prediction is incorrect: reducing the contrast makes an edge look sharper, and increasing the blur reduces the perceived contrast. Both of these effects can be explained by introducing a smoothed threshold to one of the processing stages of the model. It is shown that, with this modification, the model can predict the perceived contrast and blur of a number of edge profiles that differ markedly from the ideal Gaussian edge profiles on which the templates are based. With only a few exceptions, the results from all the experiments on blur and contrast perception can be explained reasonably well using one set of parameters for each subject. In the few cases where the model fails, possible extensions to the model are discussed.
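The core detection step of such a model can be sketched briefly: convolve the image (here a 1-D luminance profile) with a Gaussian derivative operator and locate the response peak. The scale choice and the blur-recovery remark below are simplified illustrations, not the thesis's full template-matching procedure.

```python
# Detect a Gaussian-integral edge with a first Gaussian derivative operator.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

x = np.arange(-50.0, 51.0)
blur, contrast = 4.0, 1.0
# Gaussian edge: the integral of a Gaussian of standard deviation `blur`.
edge = contrast * 0.5 * (1.0 + erf(x / (blur * np.sqrt(2.0))))

scale = 3.0                                   # operator scale (assumed)
response = gaussian_filter1d(edge, sigma=scale, order=1)  # derivative filter
peak = int(np.argmax(np.abs(response)))
print("edge located at x =", x[peak])         # ~0, the true edge position
# For this edge the response is itself Gaussian with variance blur^2 + scale^2,
# which is the property that lets a template model recover the edge blur.
```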
Abstract:
Background: The controversy surrounding the non-uniqueness of predictive gene lists (PGLs), i.e. small selected subsets of genes drawn from the very large number of candidates available in DNA microarray experiments, is now widely acknowledged [1]. Many of these studies have focused on constructing discriminative semi-parametric models and as such are also subject to the issue of random correlations of sparse model selection in high-dimensional spaces. In this work we outline a different approach, based around an unsupervised, patient-specific, nonlinear topographic projection of predictive gene lists. Methods: We construct nonlinear topographic projection maps based on inter-patient gene-list relative dissimilarities. The Neuroscale, Stochastic Neighbor Embedding (SNE) and Locally Linear Embedding (LLE) techniques have been used to construct two-dimensional projective visualisation plots of 70-dimensional PGLs per patient. Classifiers are also constructed to identify each patient's prognosis indicator from the resulting projections, and to investigate whether, a posteriori, the two prognosis groups are separable on the evidence of the gene lists. A literature-proposed predictive gene list for breast cancer is benchmarked against a separate gene list using the above methods. Generalisation ability is investigated by using the mapping capability of Neuroscale to visualise the follow-up study, based on the projections derived from the original dataset. Results: The results indicate that small subsets of patient-specific PGLs have insufficient prognostic dissimilarity to permit a distinction between the two prognosis groups. Uncertainty and diversity across multiple gene expressions prevent unambiguous, or even confident, patient grouping. Comparative projections across different PGLs provide similar results. Conclusion: The random correlation with an arbitrary outcome induced by small-subset selection from very high-dimensional, interrelated gene expression profiles leads to an outcome with associated uncertainty. This continuum and uncertainty preclude any attempt at constructing discriminative classifiers. However, a patient's gene expression profile could possibly be used in treatment planning, based on knowledge of other patients' responses. We conclude that many of the patients involved in such medical studies are intrinsically unclassifiable on the basis of the provided PGL evidence. This additional category of 'unclassifiable' should be accommodated within medical decision support systems if serious errors and unnecessary adjuvant therapy are to be avoided.
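The projection step can be sketched with standard tools. scikit-learn provides LLE, and t-SNE is used here as a stand-in for SNE; Neuroscale has no stock implementation, and the data below are random placeholders rather than the breast-cancer PGLs, so this only illustrates the shape of the pipeline.

```python
# Embed 70-dimensional per-patient gene-list vectors into 2-D, then test
# whether a simple classifier can separate prognosis groups on the projection.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import TSNE, LocallyLinearEmbedding
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
pgl = rng.standard_normal((100, 70))     # 100 patients x 70-gene PGL (random)
labels = rng.integers(0, 2, size=100)    # prognosis indicator (random)

lle_2d = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(pgl)
tsne_2d = TSNE(n_components=2, perplexity=20, random_state=0).fit_transform(pgl)

# Near-chance accuracy on unstructured data mirrors the paper's conclusion
# that the PGLs carry insufficient prognostic dissimilarity.
print(cross_val_score(LogisticRegression(), tsne_2d, labels, cv=5).mean())
```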
Abstract:
Cochlear implants are prosthetic devices used to provide hearing to people who would otherwise be profoundly deaf. The deliberate addition of noise to the electrode signals could increase the amount of information transmitted, but standard cochlear implants do not replicate the noise characteristic of normal hearing because, if noise is added in an uncontrolled manner with a limited number of electrodes, it will almost certainly lead to worse performance. Only if partially independent stochastic activity can be achieved in each nerve fibre can mechanisms like suprathreshold stochastic resonance be effective. We are investigating the use of stochastic beamforming to achieve greater independence. The strategy involves presenting each electrode with a linear combination of independent Gaussian noise sources. Because the cochlea is filled with conductive salt solutions, the noise currents from the electrodes interact, and the effective stimulus for each nerve fibre will therefore be a different weighted sum of the noise sources. To some extent, therefore, the effective stimulus for a nerve fibre will be independent of the effective stimulus of neighbouring fibres. For a particular patient, the electrode positions and the amount of current spread are fixed. The objective is therefore to find the linear combination of noise sources that leads to the greatest independence between nerve discharges. In this theoretical study we show that it is possible to get one independent point of excitation (one null) for each electrode and that stochastic beamforming can greatly decrease the correlation between the noise exciting different regions of the cochlea. © 2007 Copyright SPIE - The International Society for Optical Engineering.
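A toy numerical version of the beamforming argument is given below. The exponential current-spread model and all sizes are assumptions; the point is only that choosing the electrode mixing matrix as the (pseudo)inverse of the spread matrix decorrelates the effective noise at the chosen excitation points.

```python
# Stochastic beamforming sketch: electrodes carry linear mixes of independent
# Gaussian noise sources; conductive fluid re-mixes them at the nerve fibres.
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_points, n_t = 8, 8, 100_000
# Assumed current spread: exponential decay with electrode-to-fibre distance.
dist = np.abs(np.arange(n_points)[:, None] - np.arange(n_electrodes)[None, :])
spread = np.exp(-dist / 1.5)

noise = rng.standard_normal((n_electrodes, n_t))   # independent noise sources
mix = np.linalg.pinv(spread)                       # beamforming weights
stimulus = spread @ (mix @ noise)                  # effective per-fibre noise

mask = ~np.eye(n_points, dtype=bool)
print("with beamforming:   ", np.abs(np.corrcoef(stimulus)[mask]).max())  # ~0
print("without beamforming:", np.abs(np.corrcoef(spread @ noise)[mask]).max())
```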
Abstract:
This paper addresses the effectiveness of physical-layer network coding (PNC) for throughput improvement of multi-hop multicast in random wireless ad hoc networks (WAHNs). We prove that the per-session throughput order with PNC is tightly bounded as Θ(1/(√m·n·R(n))) if m = O(R⁻²(n)), where n is the total number of nodes, R(n) is the communication range, and m is the number of destinations for each multicast session. We also show that the per-session throughput order with PNC is tightly bounded as Θ(1/n) when m = Ω(R⁻²(n)). The results of this paper imply that PNC cannot improve the throughput order of multicast in random WAHNs, which runs counter to the intuition that PNC may improve the throughput order because it allows simultaneous signal access and combination.
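A quick numeric check (illustrative only) shows the two regimes above are consistent at the boundary m = R⁻²(n), where √m·R(n) = 1 and the first bound reduces to Θ(1/n); the range scaling chosen below is an arbitrary assumption.

```python
# Sanity-check the boundary between the two throughput regimes.
import math

n = 10_000
R = n ** -0.4                   # an assumed communication-range scaling
m = round(R ** -2)              # boundary case: m = R(n)^-2 destinations
bound_sparse = 1 / (math.sqrt(m) * n * R)   # Theta(1/(sqrt(m) * n * R(n)))
bound_dense = 1 / n                         # Theta(1/n)
print(bound_sparse, bound_dense)            # same order of magnitude
```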
Abstract:
The performance of wireless networks is limited by multiple access interference (MAI) in the traditional communication approach, where the interfering signals of concurrent transmissions are treated as noise. In this paper, we treat the interfering signals from a new perspective, on the basis of additive electromagnetic (EM) waves, and propose a network coding based interference cancelation (NCIC) scheme. In the proposed scheme, adjacent nodes can transmit simultaneously with careful scheduling; therefore, network performance will not be limited by the MAI. Additionally, we design a space segmentation method for general wireless ad hoc networks, which organizes the network into clusters with regular shapes (e.g., squares and hexagons) to reduce the number of relay nodes. The segmentation method works with the scheduling scheme and can help achieve better scalability and reduced complexity. We derive accurate analytic models for the probability of connectivity between two adjacent cluster heads, which is important for successful information relay. We prove that, with the proposed NCIC scheme, the transmission efficiency can be improved by at least 50% for general wireless networks compared to traditional interference avoidance schemes. Numerical results also show that the space segmentation is feasible and effective. Finally, we propose and discuss a method to implement the NCIC scheme in practical orthogonal frequency division multiplexing (OFDM) communication networks. Copyright © 2009 John Wiley & Sons, Ltd.
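The additive-wave viewpoint behind NCIC can be illustrated in a few lines: when two signals superpose, a node that already knows one of them can subtract it rather than treat it as MAI. Synchronisation, channel estimation and the actual NCIC scheduling are all omitted here, so this is only a conceptual sketch.

```python
# Known-signal cancelation over an additive channel (conceptual sketch).
import numpy as np

rng = np.random.default_rng(0)
x_a = rng.choice([-1.0, 1.0], size=32)    # BPSK symbols from node A
x_b = rng.choice([-1.0, 1.0], size=32)    # BPSK symbols from node B
received = x_a + x_b + 0.05 * rng.standard_normal(32)  # EM waves add

# Node A knows its own transmission, cancels it, and decodes B's symbols.
x_b_hat = np.sign(received - x_a)
print("bit errors:", int(np.sum(x_b_hat != x_b)))      # 0 at this noise level
```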
Abstract:
In this paper, the implementation aspects and constraints of the simplest network coding (NC) schemes for a two-way relay channel (TWRC) composed of a user equipment (mobile terminal), an LTE relay station (RS) and an LTE base station (eNB) are considered, in order to assess the usefulness of NC in more realistic scenarios. The information exchange rate gain (IERG), the energy reduction gain (ERG) and the resource utilization gain (RUG) of the NC schemes, with and without subcarrier division duplexing (SDD), are obtained by computer simulations. The usefulness of the NC schemes is evaluated for varying traffic load levels, geographical distances between the nodes, RS transmit powers, and maximum numbers of retransmissions. Simulation results show that the NC schemes with and without SDD achieve throughput gains of 0.5% and 25%, ERGs of 7-12% and 16-25%, and RUGs of 0.5-3.2%, respectively. It is found that NC can provide performance gains also for users at the cell edge. Furthermore, the ERGs of NC increase with the transmit power of the relay, while they remain the same even when the maximum number of retransmissions is reduced.
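The basic TWRC network-coding idea evaluated above can be shown in miniature: the relay XORs the two packets and broadcasts once, and each end node recovers the other's packet using its own copy. The LTE/SDD signalling, HARQ and scheduling details are not modelled.

```python
# XOR relaying on a two-way relay channel (conceptual sketch).
ue_packet = b"uplink payload"     # from the mobile terminal
enb_packet = b"downlink data!"    # from the base station (equal length here)

relay_broadcast = bytes(a ^ b for a, b in zip(ue_packet, enb_packet))

# Each side XORs the broadcast with its own packet to obtain the other's.
assert bytes(a ^ b for a, b in zip(relay_broadcast, ue_packet)) == enb_packet
assert bytes(a ^ b for a, b in zip(relay_broadcast, enb_packet)) == ue_packet
print("two packets exchanged in 3 transmissions instead of 4")
```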
Abstract:
This paper addresses the effectiveness of physical-layer network coding (PNC) for capacity improvement of multi-hop multicast in random wireless ad hoc networks (WAHNs). While it can be shown that there is a capacity gain from PNC, we prove that the per-session throughput capacity with PNC is Θ(1/(n·R(n))), where n is the total number of nodes, R(n) is the communication range, and each multicast session consists of a constant number of sinks. The result implies that PNC cannot improve the capacity order of multicast in random WAHNs, which is different from the intuition that PNC may improve the capacity order as it allows simultaneous signal reception and combination. Copyright © 2010 ACM.
Abstract:
This thesis considers sparse approximation of still images as the basis of a lossy compression system. The Matching Pursuit (MP) algorithm is presented as a method particularly suited to application in lossy scalable image coding. Its multichannel extension, capable of exploiting inter-channel correlations, is found to be an efficient way to represent colour data in RGB colour space. Known problems with MP, namely the high computational complexity of encoding and the difficulty of dictionary design, are tackled by finding an appropriate partitioning of the image. The idea of performing MP in the spatio-frequency domain, after a transform such as the Discrete Wavelet Transform (DWT), is explored. The main challenge, though, is to encode the image representation obtained after MP into a bit-stream. Novel approaches for encoding the atomic decomposition of a signal and for quantising colour amplitudes are proposed and evaluated. The image codec that has been built is capable of competing with scalable coders such as JPEG 2000 and SPIHT in terms of compression ratio.
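The greedy decomposition loop at the heart of such a codec is compact enough to sketch. The random dictionary below is a placeholder for the structured, DWT-domain dictionaries discussed above, and the fixed iteration count stands in for a rate- or distortion-driven stopping rule.

```python
# Matching Pursuit: repeatedly pick the atom most correlated with the residual.
import numpy as np

rng = np.random.default_rng(0)
dictionary = rng.standard_normal((64, 256))
dictionary /= np.linalg.norm(dictionary, axis=0)   # unit-norm atoms
signal = dictionary[:, [3, 40, 100]] @ np.array([2.0, -1.5, 0.7])

residual, atoms = signal.copy(), []
for _ in range(5):
    correlations = dictionary.T @ residual
    k = int(np.argmax(np.abs(correlations)))       # best-matching atom
    amplitude = float(correlations[k])
    atoms.append((k, amplitude))                   # (index, amplitude) to code
    residual = residual - amplitude * dictionary[:, k]

print(atoms[:3])                                   # dominant atoms found first
print("residual energy:", float(residual @ residual))
```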
Abstract:
Approximately half of current contact lens wearers suffer from dryness and discomfort, particularly towards the end of the day. Contact lens practitioners have a number of dry eye tests available to help them predict which of their patients may be at risk of contact lens drop-out, and to advise them accordingly. This thesis set out to rationalize these tests, to see whether any are of more diagnostic significance than others. This doctorate has found: (1) The Keratograph, a device which permits an automated, examiner-independent technique for measuring non-invasive tear break-up time (NITBUT), consistently measured NITBUT shorter than the Tearscope did. When measuring central corneal curvature, it recorded the spherical equivalent power of the cornea as significantly flatter than a validated automated keratometer did. (2) Non-invasive and invasive tear break-up times correlated significantly with each other, but not with the other tear metrics. Symptomology, assessed using the OSDI questionnaire, correlated more with the tests indicating possible damage to the ocular surface (including LWE, LIPCOF and conjunctival staining) than with tests of either tear volume or stability. Cluster analysis showed some statistically significant groups of patients with different sign and symptom profiles; the largest cluster demonstrated poor tear quality with both non-invasive and invasive tests, low tear volume and more symptoms. (3) Care should be taken in fitting patients new to contact lenses if they have a NITBUT of less than 10 s or an OSDI comfort rating greater than 4.2, as they are more likely to drop out within the first 6 months. Cluster analysis was not found to be beneficial in predicting which patients will succeed with lenses and which will not. A combination of the OSDI questionnaire and a NITBUT measurement was most useful, both in diagnosing dry eye and in predicting contact lens drop-out.
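The screening rule in finding (3) is simple enough to state as code. The thresholds are the thesis's reported values; the function itself is only an illustration of how a practitioner might encode them.

```python
# Flag prospective wearers at elevated risk of contact lens drop-out.

def elevated_dropout_risk(nitbut_seconds: float, osdi_comfort: float) -> bool:
    """True if NITBUT < 10 s or OSDI comfort rating > 4.2 (reported cut-offs)."""
    return nitbut_seconds < 10.0 or osdi_comfort > 4.2

print(elevated_dropout_risk(nitbut_seconds=8.5, osdi_comfort=3.0))  # True
```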