36 results for Image pre-processing

in Aston University Research Archive


Relevance:

90.00%

Publisher:

Abstract:

This thesis documents the design, manufacture and testing of a passive and non-invasive micro-scale planar particle-from-fluid filter for segregating cell types from a homogeneous suspension. The microfluidics system can be used to separate spermatogenic cells from testis biopsy samples, providing a mechanism for filtrate retrieval for assisted reproduction therapy. The system can also be used for point-of-service diagnostics applications in hospitals, lab-on-a-chip pre-processing and field applications such as clinical testing in the third world. Various design concepts are developed and manufactured, and are assessed based on etched structure morphology, robustness to variations in the manufacturing process, and design impacts on fluid flow and particle separation characteristics. Segregation was measured using image processing algorithms, which demonstrate an efficiency of more than 55% for 1 µl volumes at populations exceeding 1 × 10⁷. The technique supports a significant reduction in time over conventional processing for the separation and identification of particle groups, offering a potential reduction in the associated cost of the targeted procedure. The thesis develops a model of quasi-steady wetting flow within the micro channel and identifies the forces across the system during post-wetting equalisation. The model and its underlying assumptions are validated empirically in microfabricated test structures through a novel Micro-Particle Image Velocimetry technique. The prototype devices require neither ancillary equipment nor additional filtration media, and therefore offer fewer opportunities for sample contamination than conventional processing methods. The devices are disposable, with minimal reagent volumes and process waste. Optimal processing parameters and production methods are identified, along with improvements that could enhance performance in a number of identified potential applications.
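
As a sketch of the cross-correlation principle behind Micro-Particle Image Velocimetry (not the thesis's implementation), the following Python fragment estimates the displacement of a particle pattern between two frames; the synthetic frames, window size and random seed are assumptions made purely for illustration.

    import numpy as np
    from scipy.signal import fftconvolve

    def piv_displacement(frame_a, frame_b):
        """Estimate the (dy, dx) displacement of the particle pattern in
        frame_b relative to frame_a via FFT-based cross-correlation."""
        a = frame_a - frame_a.mean()
        b = frame_b - frame_b.mean()
        # Correlate b with a (flipping a turns convolution into correlation).
        corr = fftconvolve(b, a[::-1, ::-1], mode="full")
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        centre = np.array(a.shape) - 1   # zero-displacement position
        return np.array(peak) - centre

    # Synthetic example: a particle pattern shifted by (2, 3) pixels.
    rng = np.random.default_rng(0)
    frame_a = rng.random((64, 64))
    frame_b = np.roll(frame_a, shift=(2, 3), axis=(0, 1))
    print(piv_displacement(frame_a, frame_b))   # -> [2 3]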

Relevance:

80.00%

Publisher:

Abstract:

This thesis studies three techniques to improve the performance of standard forecasting models, with application to energy demand and prices. We focus on forecasting demand and price one day ahead. First, the wavelet transform was used as a pre-processing procedure with two approaches: multicomponent-forecasts and direct-forecasts. We empirically compared these approaches and found that the former consistently outperformed the latter. Second, adaptive models were introduced to continuously update model parameters in the testing period by combining filters with standard forecasting methods. Among these adaptive models, the adaptive LR-GARCH model was proposed for the first time in the thesis. Third, with regard to the noise distributions of the dependent variables in the forecasting models, we used either Gaussian or Student-t distributions. The thesis proposes a novel algorithm to infer the parameters of Student-t noise models. The method extends earlier work for models that are linear in parameters to the non-linear multilayer perceptron, and therefore broadens the range of models that can use a Student-t noise distribution. These techniques cannot stand alone; they must be combined with prediction models to improve performance. We combined them with several standard forecasting models: the multilayer perceptron, radial basis functions, linear regression, and linear regression with GARCH. The techniques and forecasting models were applied to two datasets from the UK energy markets: daily electricity demand (which is stationary) and gas forward prices (non-stationary). The results showed that these techniques provided good improvements in prediction performance.
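
A minimal sketch of the multicomponent-forecast idea, assuming PyWavelets for the decomposition and a toy least-squares forecaster in place of the thesis's neural and GARCH models; the wavelet, decomposition level and synthetic demand series are illustrative only.

    import numpy as np
    import pywt

    def wavelet_components(series, wavelet="db4", level=3):
        """Split a 1-D series into additive components (one approximation plus
        `level` details) via a multilevel DWT: reconstruct each band with every
        other band zeroed, so the components sum back to the series."""
        coeffs = pywt.wavedec(series, wavelet, level=level)
        components = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            components.append(pywt.waverec(kept, wavelet)[: len(series)])
        return components

    def forecast_one_step(series, lags=7):
        """Toy per-component forecaster: ordinary least squares on lagged values."""
        X = np.array([series[i - lags:i] for i in range(lags, len(series))])
        y = series[lags:]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(series[-lags:] @ w)

    # Multicomponent forecast: forecast each band separately and sum the results.
    rng = np.random.default_rng(1)
    t = np.arange(512)
    demand = 10 + np.sin(2 * np.pi * t / 48) + 0.1 * rng.standard_normal(512)
    components = wavelet_components(demand)
    print(sum(forecast_one_step(c) for c in components))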

Relevance:

80.00%

Publisher:

Abstract:

A number of papers and reports covering the techno-economic analysis of bio-oil production have been published. These have had different scopes, used different feedstocks and reflected national cost structures. This paper reviews and compares their cost estimates and the experimental results that underpin them. A comprehensive cost and performance model reflecting UK costs was produced, based on consensus data from the previous studies or on stated scenarios where data were not available. The model takes into account sales of bio-char, a co-product of pyrolysis, and the electricity consumption of the pyrolysis plant and biomass pre-processing plants. It was concluded that it should be possible to produce bio-oil in the UK from energy crops at a cost similar to that of distillate fuel oil. It was also found that there was little difference in the processing cost for woodchips and baled miscanthus. © 2011 Elsevier Ltd.
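
The kind of cost and performance calculation described can be illustrated with a toy levelised-cost function that credits bio-char sales and charges electricity consumption against the bio-oil output; every figure and parameter name in the example is a placeholder, not data from the paper.

    def bio_oil_cost_per_tonne(
        feedstock_cost,      # currency per dry tonne of biomass
        oil_yield,           # tonnes of bio-oil per dry tonne of biomass
        char_yield,          # tonnes of bio-char per dry tonne of biomass
        char_price,          # currency per tonne of bio-char sold
        electricity_kwh,     # kWh consumed per dry tonne (pyrolysis + pre-processing)
        electricity_price,   # currency per kWh
        capital_charge,      # annualised capital + fixed operating cost per dry tonne
    ):
        """Toy levelised production cost of bio-oil per tonne, crediting
        bio-char sales against the cost, in the spirit of the model described."""
        cost_per_tonne_feed = (
            feedstock_cost
            + electricity_kwh * electricity_price
            + capital_charge
            - char_yield * char_price      # co-product credit
        )
        return cost_per_tonne_feed / oil_yield

    # Purely illustrative numbers (not taken from the paper).
    print(round(bio_oil_cost_per_tonne(60, 0.60, 0.15, 80, 90, 0.12, 45), 2))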

Relevance:

80.00%

Publisher:

Abstract:

Secondary fibre paper mills are significant users of both heat and electricity, mainly derived from the combustion of fossil fuels, and the cost of producing this energy is increasing year on year. These mills are also significant producers of fibrous sludge and reject waste material which can contain high amounts of useful energy. Currently the majority of these waste fractions are disposed of by landfill, land-spreading or incineration using natural gas. These disposal methods not only present environmental problems but are also very costly. The focus of this work was to utilise the waste fractions produced at secondary fibre paper mills for the on-site production of combined heat and power (CHP) using advanced thermal conversion methods (gasification and pyrolysis) that are well suited to relatively small scales of throughput. The heat and power can either be used on-site or exported. The first stage of the work was the development of methods to condition selected paper industry wastes to enable thermal conversion. This stage required detailed characterisation of the waste streams in terms of proximate and ultimate analysis and heat content, and suitable methods to dry and condition the wastes in preparation for thermal conversion were explored. Through trials at pilot scale with both fixed bed downdraft gasification and intermediate pyrolysis systems, the energy recovered from selected wastes and waste blends in the form of product gas and pyrolysis products was quantified. The optimal process routes were selected based on the experimental results, and implementation studies were carried out at the selected candidate mills. The studies consider the pre-processing of the wastes, thermal conversion, and full integration of the energy products. The final stage of the work was an economic analysis to quantify the economic gain, return on investment and environmental benefits of the proposed processes.
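
To illustrate the sort of energy-recovery estimate that follows from the waste characterisation step, the sketch below converts an assumed throughput, moisture content and dry heating value into average CHP output; the efficiencies and figures are placeholders, not results from the thesis.

    def chp_output_from_waste(
        throughput_t_per_day,           # as-received waste throughput (t/day)
        moisture_fraction,              # mass fraction of water in as-received waste
        lhv_dry_mj_per_kg,              # lower heating value of the dry fraction (MJ/kg)
        electrical_efficiency=0.25,     # assumed gasifier-engine electrical efficiency
        thermal_efficiency=0.45,        # assumed recoverable heat fraction
    ):
        """Toy CHP sizing: convert a daily waste throughput into average electrical
        and thermal output (kW), ignoring drying energy and losses beyond the two
        efficiency factors."""
        dry_kg_per_day = throughput_t_per_day * 1000 * (1 - moisture_fraction)
        fuel_mw = dry_kg_per_day * lhv_dry_mj_per_kg / 86400   # MJ/day -> MW
        return fuel_mw * electrical_efficiency * 1000, fuel_mw * thermal_efficiency * 1000

    # Illustrative only: 20 t/day of 50% moisture sludge/reject blend at 18 MJ/kg dry.
    kw_e, kw_th = chp_output_from_waste(20, 0.50, 18)
    print(f"{kw_e:.0f} kWe, {kw_th:.0f} kWth")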

Relevance:

80.00%

Publisher:

Abstract:

Smart cameras allow video data to be pre-processed on the camera instead of being sent to a remote server for further analysis, and a network of smart cameras allows various vision tasks to be processed in a distributed fashion. While cameras may have different tasks, we concentrate on distributed tracking in smart camera networks. This application raises several highly interesting problems. Firstly, how can conflicting goals be satisfied, such as tracking objects across the network while keeping communication overhead low? Secondly, how can cameras in the network self-adapt in response to the behavior of objects and changes in scenarios, to ensure continued efficient performance? Thirdly, how can cameras organise themselves to improve the overall network's performance and efficiency? This paper presents a simulation environment, called CamSim, that allows distributed self-adaptation and self-organisation algorithms to be tested without setting up a physical smart camera network. The simulation tool is written in Java and is therefore highly portable between operating systems. Simplifying various problems of computer vision and network communication enables a focus on implementing and testing new self-adaptation and self-organisation algorithms for cameras to use.
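
The tracking-versus-communication trade-off described can be illustrated with a heavily simplified hand-over simulation; this Python sketch does not reflect CamSim's actual Java API, and the Camera class, corridor geometry, margins and speeds are invented for the example.

    class Camera:
        """Toy smart camera on a 1-D corridor: it tracks an object while the object
        is in its field of view and advertises the track to neighbours (incurring a
        communication cost) when the object nears a boundary."""
        def __init__(self, left, right):
            self.left, self.right = left, right
            self.messages_sent = 0

        def sees(self, x):
            return self.left <= x < self.right

        def near_edge(self, x, margin=1.0):
            return self.sees(x) and (x - self.left < margin or self.right - x < margin)

    def simulate(steps=200, speed=0.3):
        cameras = [Camera(i * 5, (i + 1) * 5) for i in range(4)]   # 4 abutting views
        x, tracked_steps = 0.0, 0
        owner = cameras[0]
        for _ in range(steps):
            x = (x + speed) % 20
            if owner.sees(x):
                tracked_steps += 1
                if owner.near_edge(x):
                    owner.messages_sent += 1   # advertise the track near a boundary
                    owner = next(c for c in cameras if c.sees((x + speed) % 20))
            else:
                owner = next((c for c in cameras if c.sees(x)), owner)
        total_msgs = sum(c.messages_sent for c in cameras)
        return tracked_steps / steps, total_msgs

    print(simulate())   # (fraction of steps tracked, messages exchanged)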

Relevance:

80.00%

Publisher:

Abstract:

Background: Allergy is a form of hypersensitivity to normally innocuous substances, such as dust, pollen, foods or drugs. Allergens are small antigens that commonly provoke an IgE antibody response. There are two types of bioinformatics-based allergen prediction. The first approach follows FAO/WHO Codex alimentarius guidelines and searches for sequence similarity. The second approach is based on identifying conserved allergenicity-related linear motifs. Both approaches assume that allergenicity is a linearly coded property. In the present study, we applied ACC pre-processing to sets of known allergens, developing alignment-independent models for allergen recognition based on the main chemical properties of amino acid sequences. Results: A set of 684 food, 1,156 inhalant and 555 toxin allergens was collected from several databases. A set of non-allergens from the same species was selected to mirror the allergen set. The amino acids in the protein sequences were described by three z-descriptors (z1, z2 and z3) and converted into uniform vectors by auto- and cross-covariance (ACC) transformation, so that each protein was represented as a vector of 45 variables. Five machine learning methods for classification were applied to derive models for allergen prediction: discriminant analysis by partial least squares (DA-PLS), logistic regression (LR), decision tree (DT), naïve Bayes (NB) and k nearest neighbours (kNN). The best performing model was derived by kNN at k = 3. It was optimized, cross-validated and implemented in a server named AllerTOP, freely accessible at http://www.pharmfac.net/allertop. AllerTOP also predicts the most probable route of exposure and outperforms other servers for allergen prediction, with 94% sensitivity. Conclusions: AllerTOP is the first alignment-free server for in silico prediction of allergens based on the main physicochemical properties of proteins. Significantly, as well as allergenicity, AllerTOP is able to predict the route of allergen exposure: food, inhalant or toxin. © 2013 Dimitrov et al.; licensee BioMed Central Ltd.
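
A sketch of the ACC pre-processing step as it is commonly formulated: each residue is replaced by its z-descriptors and auto-/cross-covariance terms are averaged over lags 1 to 5, giving the 45 variables mentioned. The z-values below are placeholders rather than the published z-scales, and the centring and lag range are assumptions of this illustration.

    import numpy as np

    # Placeholder z-descriptors (z1, z2, z3) for a few amino acids; the real
    # z-scale values from the literature would be used in practice.
    Z_SCALES = {
        "A": (0.07, -1.73, 0.09), "C": (0.71, -0.97, 4.13),
        "D": (3.64, 1.13, 2.36),  "E": (3.08, 0.39, -0.07),
        "G": (2.23, -5.36, 0.30), "K": (2.84, 1.41, -3.14),
        "L": (-4.19, -1.03, -0.98), "S": (1.96, -1.63, 0.57),
    }

    def acc_transform(sequence, max_lag=5):
        """Auto- and cross-covariance transform: for every ordered pair of
        descriptors (j, k) and every lag 1..max_lag, average the product of
        descriptor j at position i and descriptor k at position i+lag. With 3
        descriptors and 5 lags this yields a 45-element vector, independent of
        sequence length."""
        z = np.array([Z_SCALES[a] for a in sequence])   # shape (n, 3)
        z = z - z.mean(axis=0)                           # centre each descriptor
        n, d = z.shape
        features = []
        for lag in range(1, max_lag + 1):
            for j in range(d):
                for k in range(d):
                    features.append(np.mean(z[:n - lag, j] * z[lag:, k]))
        return np.array(features)

    print(acc_transform("ACDEGKLSACDEGKLS").shape)   # (45,)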

Relevance:

40.00%

Publisher:

Abstract:

Textured regions in images can be defined as those regions containing a signal which has some measure of randomness. This thesis is concerned with the description of homogeneous texture in terms of a signal model and with developing a means of spatially separating regions of differing texture. A signal model is presented which is based on the assumption that a large class of textures can adequately be represented by their Fourier amplitude spectra only, with the phase spectra modelled by a random process. It is shown that, under mild restrictions, this model leads to a stationary random process, and results indicate that the assumption is valid for textures lacking significant local structure. A texture segmentation scheme is described which separates textured regions based on the assumption that each texture has a different distribution of signal energy within its amplitude spectrum. A set of bandpass quadrature filters is applied to the original signal and the envelope of the output of each filter is taken. The filters are designed to have maximum mutual energy concentration in both the spatial and spatial frequency domains, thus providing high spatial and class resolutions. The outputs of these filters are processed using a multi-resolution classifier which applies a clustering algorithm to the data at a low spatial resolution and then performs a boundary estimation operation in which processing is carried out over a range of spatial resolutions. Results demonstrate high performance, in terms of classification error, for a range of synthetic and natural textures.
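
As an illustration of the bandpass quadrature filter and envelope step (not the thesis's specific filter design), the sketch below applies a complex Gabor-type filter and takes the magnitude of its response as the local texture energy; the frequencies, bandwidth and synthetic textures are assumed for the example.

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(freq, theta, sigma, size=31):
        """Complex Gabor kernel: a Gaussian-windowed complex exponential, i.e. an
        even/odd quadrature pair packed into one complex filter."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return envelope * np.exp(2j * np.pi * freq * xr)

    def texture_energy(image, freq=0.15, theta=0.0, sigma=4.0):
        """Local texture energy: magnitude (envelope) of the quadrature filter
        response, used here as a per-pixel class feature."""
        response = fftconvolve(image, gabor_kernel(freq, theta, sigma), mode="same")
        return np.abs(response)

    # Two synthetic textures side by side, with different dominant frequencies.
    x = np.arange(128)
    left = np.sin(2 * np.pi * 0.15 * x)      # matches the filter's passband
    right = np.sin(2 * np.pi * 0.40 * x)     # outside the passband
    image = np.hstack([np.tile(left, (128, 1)), np.tile(right, (128, 1))])
    energy = texture_energy(image)
    print(energy[:, :128].mean() > energy[:, 128:].mean())   # True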

Relevance:

40.00%

Publisher:

Abstract:

The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes. A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for the time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an 80,000 programme to implement the system proposed by the author.
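
The iso-lux diagram itself is straightforward to produce once a two-dimensional illuminance array is available; the sketch below contours a synthetic two-lobe beam, with all values invented for illustration rather than taken from any headlamp measurement.

    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic illuminance grid (lux) over horizontal/vertical beam angles;
    # a crude two-lobe placeholder, not a real headlamp measurement.
    h = np.linspace(-30, 30, 121)      # horizontal angle (degrees)
    v = np.linspace(-15, 15, 61)       # vertical angle (degrees)
    H, V = np.meshgrid(h, v)
    illuminance = (300 * np.exp(-((H - 5)**2 / 60 + (V + 2)**2 / 15))
                   + 150 * np.exp(-((H + 8)**2 / 90 + (V + 3)**2 / 20)))

    # Iso-lux diagram: illuminance contours over the beam distribution.
    levels = [10, 25, 50, 100, 200]
    cs = plt.contour(H, V, illuminance, levels=levels)
    plt.clabel(cs, fmt="%d lx")
    plt.xlabel("Horizontal angle (deg)")
    plt.ylabel("Vertical angle (deg)")
    plt.title("Iso-lux diagram (synthetic data)")
    plt.savefig("isolux.png")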

Relevance:

40.00%

Publisher:

Abstract:

Sensory processing is a crucial underpinning of the development of social cognition, a function which is compromised to a variable degree in patients with pervasive developmental disorders (PDD). In this manuscript, we review some of the most recent and relevant contributions that have looked at auditory sensory processing derangement in PDD. The variability in the clinical characteristics of the samples studied so far, in terms of the severity of the associated cognitive deficits and associated limited compliance, underlying aetiology and demographic features, makes a univocal interpretation arduous. We hypothesise that, in patients with severe mental deficits, the presence of impaired auditory sensory memory, as expressed by the mismatch negativity, could be a non-specific indicator of more diffuse cortical deficits rather than being causally related to the clinical symptomatology. More consistent findings seem to emerge from studies on less severely impaired patients, in whom increased pitch perception has been interpreted as an indicator of increased local processing, probably as a compensatory mechanism for the lack of global processing (central coherence). This latter hypothesis seems extremely attractive, and future trials in larger cohorts of patients, possibly standardising the characteristics of the stimuli, are a much-needed development. Finally, the specificity of the role of auditory derangement, as opposed to other sensory channels, needs to be assessed more systematically using multimodal stimuli in the same patient group. (c) 2006 Elsevier B.V. All rights reserved.

Relevance:

40.00%

Publisher:

Abstract:

Accurate measurement of the intervertebral kinematics of the cervical spine can support the diagnosis of widespread diseases related to neck pain, such as chronic whiplash dysfunction, arthritis, and segmental degeneration. The natural inaccessibility of the spine, its complex anatomy, and the small range of motion only permit concise measurement in vivo. Low-dose X-ray fluoroscopy allows time-continuous screening of the cervical spine during the patient's spontaneous motion. To obtain accurate motion measurements, each vertebra was tracked by means of image processing along a sequence of radiographic images. To obtain a time-continuous representation of motion and to reduce noise in the experimental data, smoothing spline interpolation was used. Intervertebral motion for the cervical segments was estimated by processing the patient's fluoroscopic sequence; the intervertebral angle and displacement and the instantaneous centre of rotation were computed. The RMS fitting error was about 0.2 degrees for rotation and 0.2 mm for displacement. © 2013 Paolo Bifulco et al.
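
A sketch of the smoothing-spline step on a synthetic intervertebral-angle trace, using scipy's UnivariateSpline as a stand-in for whatever implementation the study used; the sampling rate, motion model, noise level and smoothing factor are assumptions of the example.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Synthetic intervertebral angle trace (degrees) from a fluoroscopic
    # sequence sampled at 8 frames/s, with tracking noise added.
    rng = np.random.default_rng(0)
    t = np.arange(0, 10, 1 / 8)                       # time (s)
    true_angle = 6 * np.sin(2 * np.pi * t / 10)       # slow flexion/extension
    measured = true_angle + 0.3 * rng.standard_normal(t.size)

    # Smoothing spline: s trades off fidelity against smoothness.
    spline = UnivariateSpline(t, measured, k=3, s=t.size * 0.3**2)
    smoothed = spline(t)
    angular_velocity = spline.derivative()(t)          # time-continuous derivative

    rms_error = np.sqrt(np.mean((smoothed - true_angle) ** 2))
    print(f"RMS fitting error: {rms_error:.2f} deg")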

Relevance:

30.00%

Publisher:

Abstract:

Two experiments examined the extent to which attitudes changed by majority and minority influence are resistant to counter-persuasion. In both experiments, participants' attitudes were measured after they had been exposed to two messages, separated in time, which argued opposite positions (an initial message and a counter-message). In the first experiment, attitudes following minority endorsement of the initial message were more resistant to a second counter-message only when the initial message contained strong rather than weak arguments. Attitudes changed following majority influence did not resist the second counter-message and returned to their pre-test level. Experiment 2 varied whether memory was warned (i.e., message recipients expected to recall the message) or not, in order to manipulate message processing. When memory was warned, which should increase message processing, attitudes changed following both majority and minority influence resisted the second counter-message. The results support the view that minority influence instigates systematic processing of its arguments, leading to attitudes that resist counter-persuasion, whereas attitudes formed following majority influence yield to counter-persuasion unless there is a secondary task that encourages message processing.

Relevance:

30.00%

Publisher:

Abstract:

The growth and advances made in computer technology have led to the present interest in picture processing techniques. When considering image data compression, the tendency is towards transform source coding of the image data. This method of source coding has reached a stage where very high reductions in the number of bits representing the data can be made while still preserving image fidelity. The point has thus been reached where channel errors need to be considered, as these will be inherent in any image communication system. The thesis first describes general source coding of images, with the emphasis almost totally on transform coding. The transform adopted is the Discrete Cosine Transform (DCT), which is common to both transform coders considered. Thereafter the two source coding techniques differ substantially: one involves zonal coding, the other threshold coding. Having outlined the theory and methods of implementation of the two source coders, their performances are then assessed, first in the absence, and then in the presence, of channel errors. These tests provide a foundation on which to base methods of protection against channel errors. Six different protection schemes are then proposed. Results obtained from each combined source coding and channel error protection scheme, each of which is described in full, are then presented. Comparisons are made between the schemes and indicate the best one to use for a given channel error rate.
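
One of the two source-coding approaches, zonal coding of DCT coefficients, can be sketched directly: transform a block, keep only a fixed low-frequency zone, and inverse-transform. The block size, zone shape and test block below are illustrative choices, not those of the thesis.

    import numpy as np
    from scipy.fft import dctn, idctn

    def zonal_code_block(block, kept=10):
        """Zonal coding of one block: transform with a 2-D DCT, keep only the
        low-frequency 'zone' of coefficients (those with row+column index below
        a threshold), discard the rest, and inverse-transform."""
        coeffs = dctn(block, norm="ortho")
        rows, cols = np.indices(block.shape)
        zone = (rows + cols) < kept            # triangular low-frequency zone
        return idctn(np.where(zone, coeffs, 0.0), norm="ortho")

    # Example: a smooth 8x8 block is reconstructed closely from a small zone.
    x = np.arange(8)
    block = 100 + 10 * np.add.outer(np.sin(x / 4), np.cos(x / 5))
    reconstructed = zonal_code_block(block, kept=4)
    print(np.abs(reconstructed - block).max())   # small reconstruction error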

Relevance:

30.00%

Publisher:

Abstract:

Digital image processing is exploited in many diverse applications, but the size of digital images places excessive demands on current storage and transmission technology. Image data compression is required to permit further use of digital image processing. Conventional image compression techniques based on statistical analysis have reached a saturation level, so it is necessary to explore more radical methods. This thesis is concerned with novel methods, based on the use of fractals, for achieving significant compression of image data within reasonable processing time without introducing excessive distortion. Images are modelled as fractal data and this model is exploited directly by compression schemes. The validity of this approach is demonstrated by showing that the fractal complexity measure of fractal dimension is an excellent predictor of image compressibility. A method of fractal waveform coding is developed which has low computational demands and performs better than conventional waveform coding methods such as PCM and DPCM. Fractal techniques based on the use of space-filling curves are developed as a mechanism for hierarchical application of conventional techniques. Two particular applications are highlighted: the re-ordering of data during image scanning and the mapping of multi-dimensional data to one dimension. It is shown that there are many possible space-filling curves which may be used to scan images and that selection of an optimum curve leads to significantly improved data compression. The multi-dimensional mapping property of space-filling curves is used to speed up substantially the lookup process in vector quantisation. Iterated function systems are compared with vector quantisers, and the computational complexity of iterated function system encoding is also reduced by using the efficient matching algorithms identified for vector quantisers.
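
The re-ordering of image data along a space-filling curve can be sketched with a Hilbert curve, one common choice (the thesis compares several curves, which are not reproduced here); the scan below re-orders a small image so that neighbouring samples in the 1-D stream come from neighbouring pixels.

    import numpy as np

    def hilbert_index(order, x, y):
        """Distance along a Hilbert curve covering a 2**order x 2**order grid for
        the cell (x, y); standard bit-manipulation formulation."""
        n = 2 ** order
        d = 0
        s = n // 2
        while s > 0:
            rx = 1 if (x & s) else 0
            ry = 1 if (y & s) else 0
            d += s * s * ((3 * rx) ^ ry)
            # Rotate/reflect the quadrant so the sub-curve joins up correctly.
            if ry == 0:
                if rx == 1:
                    x, y = n - 1 - x, n - 1 - y
                x, y = y, x
            s //= 2
        return d

    def hilbert_scan(image):
        """Return the pixels of a square 2**k x 2**k image re-ordered along a
        Hilbert curve, so spatially adjacent pixels stay close in the 1-D scan."""
        n = image.shape[0]
        order = int(np.log2(n))
        coords = [(x, y) for y in range(n) for x in range(n)]
        coords.sort(key=lambda p: hilbert_index(order, p[0], p[1]))
        return np.array([image[y, x] for x, y in coords])

    image = np.arange(16).reshape(4, 4)
    print(hilbert_scan(image))   # neighbouring scan samples come from neighbouring pixels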

Relevance:

30.00%

Publisher:

Abstract:

Category-specific disorders are frequently explained by suggesting that living and non-living things are processed in separate subsystems (e.g. Caramazza & Shelton, 1998). If subsystems exist, there should be benefits for normal processing, beyond the influence of structural similarity. However, no previous study has separated the relative influences of similarity and semantic category. We created novel examples of living and non-living things so category and similarity could be manipulated independently. Pre-tests ensured that our images evoked appropriate semantic information and were matched for familiarity. Participants were trained to associate names with the images and then performed a name-verification task under two levels of time pressure. We found no significant advantage for living things alongside strong effects of similarity. Our results suggest that similarity rather than category is the key determinant of speed and accuracy in normal semantic processing. We discuss the implications of this finding for neuropsychological studies. © 2005 Psychology Press Ltd.

Relevance:

30.00%

Publisher:

Abstract:

Functionalisation of polystyrene (PS) and ethylene-co-propylene-co-cyclopentadiene terpolymer (EPDM) with acrylic acid (AA) in a melt reactive processing procedure, in the presence of a peroxide (Trigonox 101) and coagents, divinylbenzene, DVB (for PS), and trimethylolpropane triacrylate, TRIS (for EPDM), was successfully carried out. The level of grafting of the AA, as determined by infrared analysis, was significantly enhanced by the coagents. The grafting reaction of AA takes place simultaneously with homopolymerisation of the monomers and with melt degradation and crosslinking reactions of the polymers. The extent of these competing reactions was inferred from measurements of melt flow index and insoluble gel content. Through judicious use of both the peroxide and the coagent, particularly TRIS, unwanted side reactions were minimised. Five different processing methods were investigated for both functionalisation experiments; direct addition of the polymer pre-mixed with peroxide and reactive modifiers was found to give the optimum conditions for grafting. The functionalised PS (F-PS) and EPDM (F-EPD), and maleinised polypropylene carrying a potential antioxidant, N-(4-anilinophenyl maleimide) (F-PP), were melt blended in binary mixtures of F-PS/F-EPD and F-PP/F-EPD in the presence (or absence) of organic diamines which act as interlinking agents, e.g. ethylene diamine (EDA) and hexamethylene diamine (HEMDA). The presence of an interlinking agent, particularly HEMDA, gave significant enhancement of the mechanical properties of the blend, suggesting that the copolymer formed acted as a compatibiliser for the otherwise incompatible polymer pairs. The functionalised and amidised blends, F and A-PS/EPDM (SPD1) and F and A-PP/EPDM (SPD2), were subsequently used as compatibiliser concentrates in the corresponding PS/EPDM and PP/EPDM blends containing various weight proportions of the homopolymers. SPD1 caused a general decrease in tensile strength, albeit with an increase in drop impact strength, particularly in blends containing a high PS content (80%). SPD2 was particularly effective in enhancing impact strength in blends containing a low weight ratio of PP (<70%). SPD2 was also a good thermal antioxidant, albeit less effective than a commercial antioxidant. In all blends, evidence of compatibility was examined by scanning electron microscopy.