908 results for Pre-processing


Relevance: 60.00%

Abstract:

We are developing a telemedicine application that offers automated diagnosis of facial (Bell's) palsy through a Web service. We used a test data set of 43 images of facial palsy patients and 44 normal people to develop the automatic recognition algorithm. Three different image pre-processing methods were used. Machine learning techniques (support vector machine, SVM) were used to examine the difference between the two halves of the face. If there was a sufficient difference, then the SVM recognized facial palsy; otherwise, if the halves were roughly symmetrical, the SVM classified the image as normal. It was found that the facial palsy images had a greater Hamming distance than the normal images, indicating greater asymmetry. The median distance in the normal group was 331 (interquartile range 277-435) and the median distance in the facial palsy group was 509 (interquartile range 334-703). This difference was significant (P
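
A minimal sketch of the asymmetry measure described above, assuming the face image has already been aligned and binarised by one of the pre-processing steps; the image size and names are illustrative, not taken from the paper.

```python
import numpy as np

def hamming_asymmetry(face: np.ndarray) -> int:
    """Hamming distance between the left half of a binarised, vertically
    aligned face image and the mirrored right half."""
    h, w = face.shape
    left = face[:, : w // 2]
    right = np.fliplr(face[:, w - w // 2 :])
    return int(np.count_nonzero(left != right))

# Illustrative use: larger distances indicate greater asymmetry,
# which the SVM then maps to a palsy / normal decision.
rng = np.random.default_rng(0)
face = rng.integers(0, 2, size=(64, 64))   # stand-in for a pre-processed image
print(hamming_asymmetry(face))
```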

Relevance: 60.00%

Abstract:

Frequent itemset mining is well explored for various data types, and its computational complexity is well understood. There are methods to deal effectively with the computational problems. This paper presents another approach to further enhance the performance of frequent itemset computation. We have made a series of observations that led us to invent data pre-processing methods such that the final step of the Partition algorithm, in which a combination of all local candidate sets must be processed, is executed on substantially smaller input data. The paper shows results from several experiments that confirm our general and formally presented observations.
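
A hedged sketch of the Partition idea referred to above: each partition of the transaction database produces local frequent itemsets, and only the union of these local candidates needs to be counted in the final full scan. The pre-processing proposed in the paper aims to shrink exactly that union; the toy code below only illustrates the baseline two-phase scheme, with transactions and thresholds chosen purely for illustration.

```python
from itertools import combinations

def local_frequent(transactions, min_support, max_size=3):
    """Frequent itemsets of one partition (naive counting, for illustration)."""
    counts = {}
    for t in transactions:
        for k in range(1, max_size + 1):
            for itemset in combinations(sorted(t), k):
                counts[itemset] = counts.get(itemset, 0) + 1
    return {s for s, c in counts.items() if c >= min_support}

# Phase 1: local candidates from each partition.
partitions = [
    [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}],
    [{"b", "c"}, {"a", "b"}, {"b"}],
]
candidates = set().union(*(local_frequent(p, min_support=2) for p in partitions))

# Phase 2: one full scan counts only the merged candidate set.
all_tx = [t for p in partitions for t in p]
global_frequent = {
    s for s in candidates
    if sum(set(s) <= t for t in all_tx) >= 3   # global minimum support (illustrative)
}
print(sorted(global_frequent))
```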

Relevance: 60.00%

Abstract:

This thesis is a study of three techniques to improve the performance of some standard forecasting models, applied to energy demand and prices. We focus on forecasting demand and price one day ahead. First, the wavelet transform was used as a pre-processing procedure with two approaches: multicomponent forecasts and direct forecasts. We have empirically compared these approaches and found that the former consistently outperformed the latter. Second, adaptive models were introduced to continuously update model parameters in the testing period by combining filters with standard forecasting methods. Among these adaptive models, the adaptive LR-GARCH model was proposed for the first time in the thesis. Third, with regard to the noise distributions of the dependent variables in the forecasting models, we used either Gaussian or Student-t distributions. This thesis proposed a novel algorithm to infer the parameters of Student-t noise models. The method is an extension of earlier work for models that are linear in parameters to the non-linear multilayer perceptron. Therefore, the proposed method broadens the range of models that can use a Student-t noise distribution. Because these techniques cannot stand alone, they must be combined with prediction models to improve their performance. We combined these techniques with some standard forecasting models: multilayer perceptron, radial basis functions, linear regression, and linear regression with GARCH. These techniques and forecasting models were applied to two datasets from the UK energy markets: daily electricity demand (which is stationary) and gas forward prices (non-stationary). The results showed that these techniques provided good improvements in prediction performance.
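
A hedged sketch of the multicomponent idea described above, assuming the PyWavelets package (pywt): the series is split into wavelet component signals that sum to the original, each component is forecast separately, and the component forecasts are added up. The trivial persistence forecaster, wavelet name and decomposition level are placeholders, not the thesis configuration.

```python
import numpy as np
import pywt

def wavelet_components(series, wavelet="db4", level=2):
    """Split a series into per-level component signals that sum to the original."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[: len(series)])
    return comps

def persistence_forecast(component):
    """Stand-in one-step forecaster; the thesis pairs each component with a real model."""
    return component[-1]

series = np.sin(np.linspace(0, 20, 256)) + 0.1 * np.random.default_rng(1).standard_normal(256)
components = wavelet_components(series)
one_day_ahead = sum(persistence_forecast(c) for c in components)
print(one_day_ahead)
```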

Relevance: 60.00%

Abstract:

This thesis documents the design, manufacture and testing of a passive and non-invasive micro-scale planar particle-from-fluid filter for segregating cell types from a homogeneous suspension. The microfluidics system can be used to separate spermatogenic cells from testis biopsy samples, providing a mechanism for filtrate retrieval for assisted reproduction therapy. The system can also be used for point-of-service diagnostics applications in hospitals, lab-on-a-chip pre-processing and field applications such as clinical testing in the third world. Various design concepts are developed and manufactured, and are assessed based on etched structure morphology, robustness to variations in the manufacturing process, and design impacts on fluid flow and particle separation characteristics. Segregation was measured using image processing algorithms that demonstrate an efficiency of more than 55% for 1 µl volumes at populations exceeding 1 x 10^7. The technique supports a significant reduction in time over conventional processing in the separation and identification of particle groups, offering a potential reduction in the associated cost of the targeted procedure. The thesis develops a model of quasi-steady wetting flow within the micro channel and identifies the forces across the system during post-wetting equalisation. The model and its underlying assumptions are validated empirically in microfabricated test structures through a novel Micro-Particle Image Velocimetry technique. The prototype devices do not require ancillary equipment or additional filtration media, and therefore offer fewer opportunities for sample contamination than conventional processing methods. The devices are disposable, with minimal reagent volumes and process waste. Optimal processing parameters and production methods are identified, along with improvements that could be made to enhance performance in a number of identified potential applications.

Relevance: 60.00%

Abstract:

A number of papers and reports covering the techno-economic analysis of bio-oil production have been published. These have had different scopes, used different feedstocks and reflected national cost structures. This paper reviews and compares their cost estimates and the experimental results that underpin them. A comprehensive cost and performance model was produced, based on consensus data from the previous studies, or on stated scenarios where data was not available, and reflecting UK costs. The model takes into account sales of the bio-char that is a co-product of pyrolysis, and the electricity consumption of the pyrolysis plant and biomass pre-processing plants. It was concluded that it should be possible to produce bio-oil in the UK from energy crops at a cost similar to that of distillate fuel oil. It was also found that there was little difference in the processing cost for woodchips and baled miscanthus. © 2011 Elsevier Ltd.
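
To make the structure of such a cost model concrete, a minimal sketch of the net-cost arithmetic: production inputs minus a credit for the bio-char co-product. All function names and figures below are hypothetical placeholders, not values or results from the paper.

```python
def bio_oil_cost_per_tonne(feedstock_cost, processing_cost, electricity_cost,
                           char_yield_t_per_t, char_price):
    """Illustrative net production cost: input costs minus the bio-char
    co-product credit. All inputs are hypothetical placeholders."""
    char_credit = char_yield_t_per_t * char_price
    return feedstock_cost + processing_cost + electricity_cost - char_credit

# Hypothetical example values (per tonne of bio-oil).
print(bio_oil_cost_per_tonne(feedstock_cost=120.0, processing_cost=90.0,
                             electricity_cost=25.0,
                             char_yield_t_per_t=0.2, char_price=60.0))
```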

Relevance: 60.00%

Abstract:

Secondary fibre paper mills are significant users of both heat and electricity, which is mainly derived from the combustion of fossil fuels. The cost of producing this energy is increasing year on year. These mills are also significant producers of fibrous sludge and reject waste material, which can contain high amounts of useful energy. Currently the majority of these waste fractions are disposed of by landfill, land-spreading or incineration using natural gas. These disposal methods not only present environmental problems but are also very costly. The focus of this work was to utilise the waste fractions produced at secondary fibre paper mills for the on-site production of combined heat and power (CHP) using advanced thermal conversion methods (gasification and pyrolysis), which are well suited to relatively small scales of throughput. The heat and power can either be used on-site or exported. The first stage of the work was the development of methods to condition selected paper industry wastes to enable thermal conversion. This stage required detailed characterisation of the waste streams in terms of proximate and ultimate analysis and heat content. Suitable methods to dry and condition the wastes in preparation for thermal conversion were also explored. Through trials at pilot scale with both fixed bed downdraft gasification and intermediate pyrolysis systems, the energy recovered from selected wastes and waste blends in the form of product gas and pyrolysis products was quantified. The optimal process routes were selected based on the experimental results, and implementation studies were carried out at the selected candidate mills. The studies consider the pre-processing of the wastes, thermal conversion, and full integration of the energy products. The final stage of the work was an economic analysis to quantify the economic gain, return on investment and environmental benefits of the proposed processes.
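
The drying and characterisation step described above hinges on how moisture reduces the recoverable energy of the waste. A minimal sketch of that relationship using the standard latent-heat correction, ignoring water formed from hydrogen combustion; the input figures are placeholders, not measurements from the mill trials.

```python
LATENT_HEAT_WATER_MJ_PER_KG = 2.44   # approximate latent heat of vaporisation of water

def net_cv_as_received(lhv_dry_mj_per_kg: float, moisture_fraction: float) -> float:
    """Net calorific value of wet material (simplified: hydrogen-derived water
    is neglected). Inputs are hypothetical, not data from the mill trials."""
    return (lhv_dry_mj_per_kg * (1.0 - moisture_fraction)
            - LATENT_HEAT_WATER_MJ_PER_KG * moisture_fraction)

# Hypothetical fibrous sludge: 15 MJ/kg on a dry basis, 55% moisture as received.
print(net_cv_as_received(15.0, 0.55))   # energy left after evaporating the moisture
```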

Relevance: 60.00%

Abstract:

Smart cameras allow pre-processing of video data on the camera instead of sending it to a remote server for further analysis. A network of smart cameras allows various vision tasks to be processed in a distributed fashion. While the cameras may have different tasks, we concentrate on distributed tracking in smart camera networks. This application introduces several highly interesting problems. Firstly, how can conflicting goals be satisfied, such as cameras in the network trying to track objects while also keeping communication overhead low? Secondly, how can cameras in the network self-adapt in response to the behaviour of objects and changes in scenarios, to ensure continued efficient performance? Thirdly, how can cameras organise themselves to improve the overall network's performance and efficiency? This paper presents a simulation environment, called CamSim, that allows distributed self-adaptation and self-organisation algorithms to be tested without setting up a physical smart camera network. The simulation tool is written in Java and hence is highly portable between different operating systems. Relaxing various problems of computer vision and network communication enables a focus on implementing and testing new self-adaptation and self-organisation algorithms for the cameras to use.
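
One way to make the tracking-versus-communication trade-off above concrete is a simple hand-over rule: a camera keeps a target while its own tracking confidence is adequate, and only offers the target to a neighbour when the expected gain outweighs the messaging cost. This toy rule is purely illustrative and is not the algorithm evaluated with CamSim; all names and thresholds are hypothetical.

```python
def decide_handover(own_confidence: float, neighbour_confidences: dict,
                    threshold: float = 0.5, message_cost: float = 0.1):
    """Toy self-adaptation rule (illustrative only): hand the target to the best
    neighbour when local confidence falls below a threshold and the expected
    gain outweighs the communication cost."""
    if own_confidence >= threshold or not neighbour_confidences:
        return None                                   # keep tracking locally
    best_cam, best_conf = max(neighbour_confidences.items(), key=lambda kv: kv[1])
    return best_cam if best_conf - own_confidence > message_cost else None

print(decide_handover(0.3, {"cam2": 0.8, "cam3": 0.4}))   # -> "cam2"
```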

Relevance: 60.00%

Abstract:

Background: Allergy is a form of hypersensitivity to normally innocuous substances, such as dust, pollen, foods or drugs. Allergens are small antigens that commonly provoke an IgE antibody response. There are two types of bioinformatics-based allergen prediction. The first approach follows the FAO/WHO Codex Alimentarius guidelines and searches for sequence similarity. The second approach is based on identifying conserved allergenicity-related linear motifs. Both approaches assume that allergenicity is a linearly coded property. In the present study, we applied ACC pre-processing to sets of known allergens, developing alignment-independent models for allergen recognition based on the main chemical properties of amino acid sequences.

Results: A set of 684 food, 1,156 inhalant and 555 toxin allergens was collected from several databases. A set of non-allergens from the same species was selected to mirror the allergen set. The amino acids in the protein sequences were described by three z-descriptors (z1, z2 and z3) and converted into uniform vectors by auto- and cross-covariance (ACC) transformation. Each protein was presented as a vector of 45 variables. Five machine learning methods for classification were applied in the study to derive models for allergen prediction: discriminant analysis by partial least squares (DA-PLS), logistic regression (LR), decision tree (DT), naïve Bayes (NB) and k nearest neighbours (kNN). The best performing model was derived by kNN at k = 3. It was optimized, cross-validated and implemented in a server named AllerTOP, freely accessible at http://www.pharmfac.net/allertop. AllerTOP also predicts the most probable route of exposure. In comparison to other servers for allergen prediction, AllerTOP outperforms them with 94% sensitivity.

Conclusions: AllerTOP is the first alignment-free server for in silico prediction of allergens based on the main physicochemical properties of proteins. Significantly, as well as allergenicity, AllerTOP is able to predict the route of allergen exposure: food, inhalant or toxin. © 2013 Dimitrov et al.; licensee BioMed Central Ltd.
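
A hedged sketch of the ACC pre-processing described in the Results: each residue is replaced by its three z-descriptor values, and auto- and cross-covariance terms over a range of sequence lags turn the variable-length sequence into a fixed-length vector. Assuming a maximum lag of 5, three descriptors give the 45 variables per protein mentioned above; the exact normalisation and the z-scale values shown are illustrative placeholders, not the published tables.

```python
import numpy as np

def acc_transform(sequence: str, zscales: dict, max_lag: int = 5) -> np.ndarray:
    """Auto- and cross-covariance of z-descriptors along the sequence.
    With 3 descriptors and max_lag = 5 this yields a 45-element vector."""
    z = np.array([zscales[aa] for aa in sequence])      # shape (n, 3)
    n, d = z.shape
    features = []
    for lag in range(1, max_lag + 1):
        for j in range(d):
            for k in range(d):
                features.append(np.dot(z[:n - lag, j], z[lag:, k]) / (n - lag))
    return np.array(features)

# Placeholder z-scale values (illustrative, not the published z1/z2/z3 tables).
zscales = {"A": (0.1, -0.2, 0.3), "G": (0.2, 0.1, -0.1), "L": (-0.4, 0.3, 0.2)}
print(acc_transform("AGLAGLAG", zscales).shape)         # -> (45,)
```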

Relevance: 60.00%

Abstract:

Modern geographical databases, which are at the core of geographic information systems (GIS), store a rich set of aspatial attributes in addition to geographic data. Typically, aspatial information comes in textual and numeric format. Retrieving information constrained on spatial and aspatial data from geodatabases gives GIS users the ability to perform more interesting spatial analyses, and allows applications to support composite location-aware searches; for example, in a real estate database: “Find the nearest homes for sale to my current location that have a backyard and whose prices are between $50,000 and $80,000”. Efficient processing of such queries requires combined indexing strategies for multiple types of data. Existing spatial query engines commonly apply a two-filter approach (a spatial filter followed by a non-spatial filter, or vice versa), which can incur large performance overheads. On the other hand, the amount of geolocation data in databases has recently grown rapidly, due in part to advances in geolocation technologies (e.g., GPS-enabled smartphones) that allow users to associate location data with objects or events. This poses potential data ingestion challenges for practical GIS databases handling large data volumes. In this dissertation, we first show how indexing spatial data with R-trees (a typical data pre-processing task) can be scaled with MapReduce, a widely adopted parallel programming model for data-intensive problems. The evaluation of our algorithms in a Hadoop cluster showed close to linear scalability in building R-tree indexes. Subsequently, we develop efficient algorithms for processing spatial queries with aspatial conditions. Novel techniques for simultaneously indexing spatial, textual and numeric data are developed to that end. Experimental evaluations with real-world, large spatial datasets measured query response times within the sub-second range for most cases, and up to a few seconds for a small number of cases, which is reasonable for interactive applications. Overall, these results show that the MapReduce parallel model is suitable for indexing tasks in spatial databases, and that an adequate combination of spatial and aspatial attribute indexes can attain acceptable response times for interactive spatial queries with constraints on aspatial data.
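
A minimal, library-free sketch of the map/reduce split behind bulk spatial indexing of the kind described above: the map phase assigns points to coarse spatial partitions, each partition is summarised locally (here by a bounding box, standing in for a local R-tree), and the reduce phase collects the per-partition summaries. The grid scheme and names are illustrative, not the dissertation's actual algorithm.

```python
from collections import defaultdict

def map_phase(points, cell_size=10.0):
    """Emit (partition key, point) pairs based on a fixed grid."""
    for x, y in points:
        yield (int(x // cell_size), int(y // cell_size)), (x, y)

def reduce_phase(grouped):
    """Per partition, build a local summary (bounding box as a stand-in
    for a locally constructed R-tree)."""
    index = {}
    for cell, pts in grouped.items():
        xs, ys = zip(*pts)
        index[cell] = (min(xs), min(ys), max(xs), max(ys))
    return index

points = [(1.0, 2.0), (3.5, 9.9), (12.0, 4.0), (15.5, 18.2)]
grouped = defaultdict(list)
for key, pt in map_phase(points):          # shuffle step: group by partition key
    grouped[key].append(pt)
print(reduce_phase(grouped))
```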

Relevance: 60.00%

Abstract:

Shadows and illumination play an important role in generating a realistic scene in computer graphics. Most Augmented Reality (AR) systems track markers placed in a real scene and retrieve their position and orientation to serve as a frame of reference for added computer-generated content, thereby producing an augmented scene. Realistic depiction of augmented content with coherent visual cues is a desired goal in many AR applications. However, rendering an augmented scene with realistic illumination is a complex task. Many existing approaches rely on a non-automated pre-processing phase to retrieve illumination parameters from the scene. Other techniques rely on specific markers that contain light probes to perform environment lighting estimation. This study aims at designing a method to create AR applications with coherent illumination and shadows, using a textured cuboid marker, that does not require a training phase to provide lighting information. Such markers may easily be found in common environments: most product packaging satisfies these characteristics. Thus, we propose a way to estimate a directional light configuration using multiple texture tracking in order to render AR scenes in a realistic fashion. We also propose a novel feature descriptor that is used to perform the multiple texture tracking. Our descriptor, named the discrete descriptor, is an extension of the binary descriptor and outperforms current state-of-the-art methods in speed while maintaining their accuracy.
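
A hedged sketch of how a directional light could be recovered from several tracked, textured faces of the cuboid: under a Lambertian assumption the observed brightness of a lit face is roughly proportional to the dot product of its normal with the light direction, so the direction can be fitted by least squares. The normals and intensities below are placeholders, and the paper's actual estimation pipeline may differ.

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Least-squares fit of a directional light under a Lambertian model
    (intensity ≈ n · l for lit faces); illustrative sketch only."""
    N = np.asarray(normals, dtype=float)          # one outward face normal per row
    b = np.asarray(intensities, dtype=float)      # mean brightness of each tracked face
    l, *_ = np.linalg.lstsq(N, b, rcond=None)
    norm = np.linalg.norm(l)
    return l / norm if norm > 0 else l

# Hypothetical cuboid faces (top, front, right) and their observed mean intensities.
normals = [(0, 0, 1), (0, -1, 0), (1, 0, 0)]
intensities = [0.9, 0.5, 0.2]
print(estimate_light_direction(normals, intensities))
```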

Relevance: 60.00%

Abstract:

Speckle is being used as a characterization tool for the analysis of the dynamics of slowly varying phenomena occurring in biological and industrial samples. The retrieved data take the form of a sequence of speckle images. The analysis of these images should reveal the inner dynamics of the biological or physical process taking place in the sample. Very recently, it has been shown that principal component analysis is able to split the original data set into a collection of classes. These classes can be related to the dynamics of the observed phenomena. At the same time, statistical descriptors of biospeckle images have been used to retrieve information on the characteristics of the sample. These statistical descriptors can be calculated in almost real time and provide fast monitoring of the sample. On the other hand, principal component analysis requires longer computation time, but the results contain more information, related to spatio-temporal patterns that can be identified with physical processes. This contribution merges both descriptions and uses principal component analysis as a pre-processing tool to obtain a collection of filtered images on which a simpler statistical descriptor can be calculated. The method has been applied to slowly varying biological and industrial processes.
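
A minimal sketch of the combination described above, assuming NumPy and scikit-learn: the speckle sequence is unfolded into a pixels-by-frames matrix, PCA keeps only the leading components as a filtering step, and a simple statistical descriptor (here the temporal standard deviation per pixel, standing in for the activity descriptors used in biospeckle work) is computed on the filtered stack.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_filtered_activity(stack: np.ndarray, n_components: int = 3) -> np.ndarray:
    """stack: (frames, height, width) speckle sequence.
    Returns a per-pixel activity map computed on the PCA-filtered sequence."""
    frames, h, w = stack.shape
    X = stack.reshape(frames, h * w).T                        # rows: pixels, columns: frames
    pca = PCA(n_components=n_components)
    filtered = pca.inverse_transform(pca.fit_transform(X))    # keep leading components only
    activity = filtered.std(axis=1)                           # simple temporal descriptor
    return activity.reshape(h, w)

stack = np.random.default_rng(2).random((50, 32, 32))         # stand-in for acquired images
print(pca_filtered_activity(stack).shape)                     # -> (32, 32)
```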

Relevance: 60.00%

Abstract:

Background and aims: Machine learning techniques for the text mining of cancer-related clinical documents have not been sufficiently explored. Here some techniques are presented for the pre-processing of free-text breast cancer pathology reports, with the aim of facilitating the extraction of information relevant to cancer staging.

Materials and methods: The first technique was implemented using the freely available software RapidMiner to classify the reports according to their general layout: ‘semi-structured’ and ‘unstructured’. The second technique was developed using the open source language engineering framework GATE and aimed at the prediction of chunks of the report text containing information pertaining to the cancer morphology, the tumour size, its hormone receptor status and the number of positive nodes. The classifiers were trained and tested respectively on sets of 635 and 163 manually classified or annotated reports, from the Northern Ireland Cancer Registry.

Results: The best result of 99.4% accuracy – which included only one semi-structured report predicted as unstructured – was produced by the layout classifier with the k nearest neighbours algorithm, using the binary term occurrence word vector type with a stopword filter and pruning. For chunk recognition, the best results were found using the PAUM algorithm with the same parameters for all cases, except for the prediction of chunks containing cancer morphology. For semi-structured reports the performance ranged from 0.97 to 0.94 in precision and from 0.92 to 0.83 in recall, while for unstructured reports it ranged from 0.91 to 0.64 in precision and from 0.68 to 0.41 in recall. Poor results were found when the classifier was trained on semi-structured reports but tested on unstructured ones.

Conclusions: These results show that it is possible and beneficial to predict the layout of reports and that the accuracy of prediction of which segments of a report may contain certain information is sensitive to the report layout and the type of information sought.
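
A hedged sketch of the layout-classification step described in the Results above, using scikit-learn rather than RapidMiner: binary term-occurrence vectors with a stopword filter feed a k-nearest-neighbours classifier. The toy report snippets, labels and the choice k = 3 are placeholders, not Registry data or the study's tuned parameters.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder pathology-report snippets; the real training set held 635 reports.
reports = [
    "Tumour size: 22 mm. ER: positive. Nodes positive: 2.",
    "Size 15 mm, PR positive, 0 of 4 nodes involved.",
    "The specimen shows an invasive carcinoma with involvement of two lymph nodes.",
    "Sections show tumour infiltrating the margin; receptor status not assessed.",
]
layouts = ["semi-structured", "semi-structured", "unstructured", "unstructured"]

# Binary term occurrence + stopword filter + kNN, mirroring the configuration above.
clf = make_pipeline(
    CountVectorizer(binary=True, stop_words="english"),
    KNeighborsClassifier(n_neighbors=3),
)
clf.fit(reports, layouts)
print(clf.predict(["ER positive, tumour size 30 mm, nodes: 1 of 12."]))
```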

Relevance: 60.00%

Abstract:

Hyperspectral instruments have been incorporated in satellite missions, providing high spectral resolution data of the Earth. This data can be used in remote sensing applications such as target detection, hazard prevention and oil spill monitoring, among others. In most of these applications, one of the requirements of paramount importance is the ability to give a real-time or near real-time response. Recently, onboard processing systems have emerged to cope with the huge amount of data to be transferred from the satellite to the ground station, thus avoiding delays between hyperspectral image acquisition and its interpretation. For this purpose, compact reconfigurable hardware modules, such as field programmable gate arrays (FPGAs), are widely used. This paper proposes a parallel FPGA-based architecture for endmember signature extraction. The method, based on Vertex Component Analysis (VCA), has several advantages: it is unsupervised, fully automatic, and works without a dimensionality reduction (DR) pre-processing step. The architecture has been designed for a low cost Xilinx Zynq board with a Zynq-7020 SoC FPGA based on the Artix-7 FPGA programmable logic and tested using real hyperspectral data sets collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the Cuprite mining district in Nevada. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low cost embedded systems, opening new perspectives for onboard hyperspectral image processing.
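
A greatly simplified, software-only sketch of the projection idea behind VCA, assuming NumPy: at each step the data are projected onto a direction orthogonal to the subspace spanned by the endmembers found so far, and the pixel with the largest absolute projection becomes the next endmember. The full VCA algorithm (and its FPGA mapping in the paper) involves further steps such as SNR-dependent subspace projections, so this is only an illustration.

```python
import numpy as np

def simple_endmember_extraction(X: np.ndarray, p: int) -> np.ndarray:
    """X: (bands, pixels) hyperspectral matrix; returns indices of p endmember pixels.
    Simplified orthogonal-projection loop illustrating the idea behind VCA."""
    rng = np.random.default_rng(0)
    bands, pixels = X.shape
    E = np.zeros((bands, p))
    indices = []
    for i in range(p):
        w = rng.standard_normal(bands)
        if i > 0:                                  # remove the part already explained
            Q, _ = np.linalg.qr(E[:, :i])
            w = w - Q @ (Q.T @ w)
        proj = np.abs(w @ X)                       # projection of every pixel
        idx = int(np.argmax(proj))
        indices.append(idx)
        E[:, i] = X[:, idx]
    return np.array(indices)

X = np.random.default_rng(1).random((50, 1000))    # stand-in for an AVIRIS-like cube
print(simple_endmember_extraction(X, p=5))
```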

Relevance: 60.00%

Abstract:

A simple but efficient voice activity detector based on the Hilbert transform and a dynamic threshold is presented, to be used in the pre-processing of audio signals. The algorithm that defines the dynamic threshold is a modification of a convex combination found in the literature. This scheme allows the detection of prosodic and silence segments in speech in the presence of non-ideal conditions such as spectrally overlapped noise. The present work shows preliminary results over a database built from political speeches. The tests were performed by adding artificial noise, on top of the natural noise, to the audio signals, and several algorithms are compared. In future work the results will be extrapolated to the field of adaptive filtering of monophonic signals and to the analysis of speech pathologies.
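
A minimal sketch of the scheme described above, assuming NumPy and SciPy: the Hilbert transform gives the signal envelope, a convex combination of the previous threshold and the current frame level updates the threshold on noise-classified frames, and frames whose envelope exceeds the threshold are marked as voice. The frame length, smoothing factor and scale are placeholders, not the parameters used in the work.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_vad(signal, frame_len=512, alpha=0.95, scale=1.5):
    """Frame-wise voice activity decisions from the Hilbert envelope and a
    dynamic threshold (convex combination update on noise frames);
    parameter values are illustrative only."""
    envelope = np.abs(hilbert(signal))
    n_frames = len(signal) // frame_len
    threshold = envelope[:frame_len].mean() * scale    # initialise on the first frame
    decisions = []
    for i in range(n_frames):
        frame = envelope[i * frame_len:(i + 1) * frame_len]
        level = frame.mean()
        is_voice = level > threshold
        decisions.append(is_voice)
        if not is_voice:                               # adapt the threshold on noise only
            threshold = alpha * threshold + (1 - alpha) * scale * level
    return np.array(decisions)

# Toy signal: one second of low-level noise followed by one second of a tone.
fs = 8000
t = np.arange(fs) / fs
speech_like = np.concatenate([0.01 * np.random.randn(fs), np.sin(2 * np.pi * 200 * t)])
print(hilbert_vad(speech_like).astype(int))
```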