957 results for novel algorithm
Abstract:
Poster presented at Escape 22, European Symposium on Computer Aided Process Engineering, University College London, UK, 17-20 June 2012.
Abstract:
In this paper, we propose a novel filter for feature selection. The filter relies on estimating the mutual information between features and classes. We bypass the estimation of the probability density function with the aid of the entropic-graph approximation of Rényi entropy and the subsequent approximation of the Shannon entropy. The complexity of this bypass depends not on the number of dimensions but on the number of patterns/samples, so the curse of dimensionality is circumvented. We show that it is then possible to outperform a greedy algorithm based on the maximal-relevance minimal-redundancy criterion. We successfully test our method in the contexts of both image classification and microarray data classification.
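To make the entropic-graph idea concrete, here is a minimal sketch (an assumption-laden illustration, not the authors' implementation): Rényi alpha-entropy can be estimated, up to an additive constant, from the length of a minimal spanning tree over the samples, so the cost depends on the number of samples rather than on the dimensionality.

    # Minimal sketch of an entropic-graph Renyi entropy estimate. The bias
    # constant beta_{d,gamma} from the theory is omitted for simplicity;
    # alpha and the feature matrix are assumptions.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree

    def renyi_entropy_mst(X, alpha=0.5):
        """Entropic-graph estimate of Renyi alpha-entropy (up to a constant)."""
        n, d = X.shape
        gamma = d * (1.0 - alpha)              # edge-weight exponent
        D = squareform(pdist(X))               # pairwise Euclidean distances
        mst = minimum_spanning_tree(D)         # sparse matrix of MST edges
        L = np.power(mst.data, gamma).sum()    # weighted MST length
        return np.log(L / n**alpha) / (1.0 - alpha)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))             # 200 samples, 50 dimensions
    print(renyi_entropy_mst(X, alpha=0.5))

The mutual information between a candidate feature subset and the class labels would then be assembled from such entropy estimates.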
Abstract:
In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels indicate that intracellular activity has occurred, signalling the presence of the malaria parasite inside the cell. Preliminary experimental results, involving analysis of red blood cells that were either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
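A rough sketch of the pixel-level idea, with illustrative thresholds (the paper's exact processing is not given in the abstract):

    # Flag pixels that are dark on average yet vary strongly over time,
    # as candidates for intracellular parasite activity. Thresholds and
    # image sizes are illustrative assumptions.
    import numpy as np

    def detect_activity(frames, dark_thresh=60, var_thresh=25.0):
        """frames: (T, H, W) uint8 stack of the same field of view."""
        stack = frames.astype(np.float64)
        dark = stack.mean(axis=0) < dark_thresh      # mask of dark pixels
        temporal_var = stack.var(axis=0)             # per-pixel variance over time
        active = dark & (temporal_var > var_thresh)  # candidate parasite activity
        return active

    rng = np.random.default_rng(1)
    frames = rng.integers(0, 120, size=(30, 64, 64), dtype=np.uint8)
    mask = detect_activity(frames)
    print(mask.sum(), "active dark pixels")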
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Visualization of high-dimensional data has always been a challenging task. Here we discuss and propose variants of non-linear data projection methods (Generative Topographic Mapping (GTM) and GTM with simultaneous feature saliency (GTM-FS)) that are adapted to be effective on very high-dimensional data. The adaptations use log-space values at certain steps of the Expectation Maximization (EM) algorithm and during the visualization process. We have tested the proposed algorithms by visualizing electrostatic potential data for Major Histocompatibility Complex (MHC) class-I proteins. The experiments show that the variants of the original GTM and GTM-FS worked successfully with data of more than 2000 dimensions, and we compare the results with other linear/nonlinear projection methods: Principal Component Analysis (PCA), Neuroscale (NSC) and Gaussian Process Latent Variable Model (GPLVM).
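The log-space adaptation can be illustrated with GTM's E-step, where responsibilities proportional to exp(-beta/2 * ||t_n - y_k||^2) underflow in very high dimensions unless computed via log-sum-exp. A minimal sketch, with shapes and beta as assumptions:

    # Numerically stable GTM responsibilities computed in log space.
    import numpy as np
    from scipy.special import logsumexp

    def responsibilities_log(T, Y, beta):
        """T: (N, D) data; Y: (K, D) GTM mixture centres; beta: precision."""
        # squared distances between every data point and every centre
        d2 = ((T[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)   # (N, K)
        log_r = -0.5 * beta * d2
        log_r -= logsumexp(log_r, axis=1, keepdims=True)          # normalise per point
        return np.exp(log_r)                                       # safe: max entry is 0

    rng = np.random.default_rng(2)
    T = rng.normal(size=(100, 2000))     # 2000-dimensional data, as in the abstract
    Y = rng.normal(size=(25, 2000))
    R = responsibilities_log(T, Y, beta=1.0)
    print(R.sum(axis=1)[:3])             # each row sums to 1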
Abstract:
A novel biosensing system based on a micromachined rectangular silicon membrane is proposed and investigated in this paper. A distributive sensing scheme is designed to monitor the dynamics of the sensing structure. An artificial neural network is used to process the measured data and to identify cell presence and density. Without specifying any particular bio-application, the investigation concentrates on testing the performance of this kind of biosensor as a general biosensing platform. The biosensing experiments on the microfabricated membranes involve seeding different cell densities onto the sensing surface of the membrane and measuring the corresponding dynamics of each tested silicon membrane in the form of a series of frequency response functions (FRFs). All of these experiments are carried out in cell culture medium to simulate a practical working environment. The EA.hy 926 endothelial cell line was chosen for the bio-experiments; it represents a particular class of biological particles that have irregular shapes, non-uniform density and uncertain growth behaviour, and that are difficult to monitor using traditional biosensors. The final predicted results reveal that a neural-network-based algorithm performing feature identification of cells from distributive sensory measurements has great potential in biosensing applications.
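As a purely hypothetical sketch of the neural-network step (the abstract does not specify the architecture, and the FRF data here are synthetic placeholders), a classifier could map FRF magnitudes to cell-density classes:

    # Hypothetical classifier from FRF magnitudes to density classes;
    # synthetic data stand in for the membrane measurements.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(3)
    n_freqs, n_per_class = 128, 40
    X, y = [], []
    for density_class in range(3):                     # e.g. low / medium / high
        base = np.sin(np.linspace(0, 6, n_freqs)) * (1 + 0.2 * density_class)
        X.append(base + 0.1 * rng.normal(size=(n_per_class, n_freqs)))
        y.append(np.full(n_per_class, density_class))
    X, y = np.vstack(X), np.concatenate(y)

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))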
Abstract:
The thesis presents new methodology and algorithms that can be used to analyse and measure the hand tremor and fatigue of surgeons while performing surgery. This will assist them in deriving useful information about their fatigue levels and make them aware of changes in their tool-point accuracy. This thesis proposes that the muscular changes of surgeons, which occur through a day of operating, can be monitored using Electromyography (EMG) signals. Multi-channel EMG signals are measured at different muscles in the upper arm of surgeons. The dependence of the EMG signals was examined to test the hypothesis that the signals are coupled with and dependent on each other. The results demonstrated that EMG signals collected from different channels while mimicking an operating posture are independent. Consequently, single-channel fatigue analysis was performed. In measuring hand tremor, a new method for determining the maximum tremor amplitude using Principal Component Analysis (PCA) and a new technique for detrending acceleration signals using the Empirical Mode Decomposition (EMD) algorithm were introduced. This tremor determination method is more representative for surgeons, and it is suggested as an alternative fatigue measure. It was combined with the complexity analysis method and applied to surgically captured data to determine whether operating has an effect on a surgeon's fatigue and tremor levels. It was found that surgical tremor and fatigue develop throughout a day of operating and that this could be determined based solely on their initial values. Finally, several Nonlinear AutoRegressive with eXogenous inputs (NARX) neural networks were evaluated. The results suggest that it is possible to monitor surgeon tremor variations during surgery from their EMG fatigue measurements.
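A minimal sketch of the PCA-based tremor measure, assuming triaxial accelerometer input: project the detrended signal onto its first principal component (the direction of maximum variance) and take the amplitude along that axis.

    # Maximum tremor amplitude via PCA on detrended 3-axis acceleration.
    import numpy as np

    def max_tremor_amplitude(acc):
        """acc: (N, 3) detrended acceleration samples."""
        centred = acc - acc.mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        projected = centred @ vt[0]              # scores on first principal axis
        return projected.max() - projected.min() # peak-to-peak tremor amplitude

    rng = np.random.default_rng(4)
    t = np.linspace(0, 10, 5000)
    tremor = np.outer(np.sin(2 * np.pi * 9 * t), [0.8, 0.5, 0.2])  # ~9 Hz tremor
    acc = tremor + 0.05 * rng.normal(size=(5000, 3))
    print(max_tremor_amplitude(acc))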
Abstract:
Refraction simulators used for undergraduate training at Aston University did not realistically reflect variations in the relationship between vision and ametropia. This was because they used an algorithm, taken from the research literature, that strictly only applied to myopes or older hyperopes and did not factor in age and pupil diameter. The aim of this study was to generate new algorithms that overcame these limitations. Clinical data were collected from the healthy right eyes of 873 white subjects aged between 20 and 70 years. Vision and refractive error were recorded along with age and pupil diameter. Re-examination of 34 subjects enabled the calculation of coefficients of repeatability. The study population was slightly biased towards females and included many contact lens wearers. Sex and contact lens wear were, therefore, recorded in order to determine whether these might influence the findings. In addition, iris colour and cylinder axis orientation were recorded as these might also be influential. A novel Blur Sensitivity Ratio (BSR) was derived by dividing vision (expressed as minimum angle of resolution) by refractive error (expressed as a scalar vector, U). Alteration of the scalar vector, to account for additional vision reduction due to oblique cylinder axes, was not found to be useful. Decision tree analysis showed that sex, contact lens wear, iris colour and cylinder axis orientation did not influence the BSR. The following algorithms arose from two stepwise multiple linear regressions:

BSR (myopes) = 1.13 + (0.24 x pupil diameter) + (0.14 x U)
BSR (hyperopes) = (0.11 x pupil diameter) + (0.03 x age) - 0.22

These algorithms together accounted for 84% of the observed variance. They showed that pupil diameter influenced vision in both forms of ametropia. They also showed the age-related decline in the ability to accommodate in order to overcome reduced vision in hyperopia.
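The two regression equations can be applied directly; in this sketch the units (pupil diameter in mm, age in years, U in dioptres) are assumptions:

    # The abstract's two BSR regressions, written out directly.
    def bsr_myope(pupil_mm, U):
        return 1.13 + 0.24 * pupil_mm + 0.14 * U

    def bsr_hyperope(pupil_mm, age_years):
        return 0.11 * pupil_mm + 0.03 * age_years - 0.22

    # e.g. a 1.50 D myope with a 4 mm pupil, and a 40-year-old hyperope
    print(bsr_myope(4.0, 1.5))        # 2.30
    print(bsr_hyperope(4.0, 40.0))    # 1.42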
Abstract:
This paper presents a novel error-free (infinite-precision) architecture for the fast implementation of the 8x8 2-D Discrete Cosine Transform. The architecture uses a new algebraic integer encoding of a 1-D radix-8 DCT that allows the separable computation of a 2-D 8x8 DCT without any intermediate number representation conversions. This is a considerable improvement on previously introduced algebraic integer encoding techniques for computing both the DCT and IDCT: it eliminates the need to approximate the transformation matrix elements by obtaining their exact representations, and hence maps the transcendental functions without any errors. Besides being multiplication-free, this new mapping scheme suits the algorithm, eliminating any computational or quantization errors and resulting in a short word length and a high-speed design.
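For context, a floating-point sketch of the separable row-column structure that the architecture evaluates; the paper's contribution is precisely to replace the approximate cosine values below with exact algebraic-integer representations:

    # Separable 2-D DCT: a 1-D DCT applied to rows, then to columns,
    # expressed here as C @ block @ C.T with a floating-point DCT matrix.
    import numpy as np

    def dct_matrix(n=8):
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C *= np.sqrt(2.0 / n)
        C[0, :] /= np.sqrt(2.0)
        return C

    def dct2_separable(block):
        C = dct_matrix(8)
        return C @ block @ C.T

    block = np.arange(64, dtype=float).reshape(8, 8)
    coeffs = dct2_separable(block)
    print(np.round(coeffs[0, 0], 3))   # DC coefficient = 8 * mean(block) = 252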
An efficient, approximate path-following algorithm for elastic net based nonlinear spike enhancement
Abstract:
Unwanted spike noise in a digital signal is a common problem in digital filtering. However, sometimes the spikes are wanted and other, superimposed, signals are unwanted. Linear, time-invariant (LTI) filtering is then ineffective because the spikes are wideband, overlapping with independent noise in the frequency domain, so no LTI filter can separate them; nonlinear filtering is necessary. However, there are applications in which the noise also includes drift or smooth signals for which LTI filters are ideal. We describe a nonlinear filter, formulated as the solution to an elastic net regularization problem, which attenuates band-limited signals and independent noise while enhancing superimposed spikes. Making use of known analytic solutions, a novel approximate path-following algorithm is given that provides a good filtered output with reduced computational effort by comparison to standard convex optimization methods. Accurate performance is shown on real, noisy electrophysiological recordings of neural spikes.
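The analytic solution exploited here is well known for the identity design: the elastic net problem min_x 0.5||y - x||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2 is solved elementwise by soft-thresholding followed by shrinkage, so a path over lam1 is cheap to trace. A minimal sketch, with illustrative parameters and the band-limited drift term omitted:

    # Elementwise elastic-net solution: soft-threshold, then shrink.
    import numpy as np

    def elastic_net_prox(y, lam1, lam2):
        soft = np.sign(y) * np.maximum(np.abs(y) - lam1, 0.0)  # kills small values
        return soft / (1.0 + lam2)                             # shrinks the rest

    rng = np.random.default_rng(5)
    t = np.linspace(0, 1, 1000)
    spikes = np.zeros_like(t)
    spikes[[200, 500, 800]] = [3.0, -2.5, 4.0]
    y = spikes + 0.2 * rng.normal(size=t.size)                 # spikes + noise

    for lam1 in [0.2, 0.5, 1.0]:                               # crude "path" over lam1
        x = elastic_net_prox(y, lam1, lam2=0.1)
        print(lam1, int(np.count_nonzero(x)), "nonzeros")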
Abstract:
Intraoperative neurophysiologic monitoring (IONM) is an integral part of spinal surgeries and involves the recording of somatosensory evoked potentials (SSEP). However, clinical application of IONM still requires anywhere between 200 and 2000 trials to obtain an SSEP signal, which is excessive and introduces a significant delay in detecting possible neurological damage during surgery. The aim of this study was to develop a means of obtaining the SSEP from far fewer recordings: just twelve. The preliminary step was to distinguish the SSEP from the ongoing brain activity. We first establish that the brain activity is indeed quasi-stationary, whereas an SSEP is expected to be identical every time a trial is recorded. An algorithm was developed using Chebyshev time windowing to precondition the SSEP trials and retain their morphological characteristics. This preconditioning was followed by a principal component analysis (PCA)-based algorithm, exploiting the quasi-stationarity of the EEG, applied to the 12 preconditioned trials. A unique Walsh transform operation was then used to identify the position of the SSEP event. An alarm is raised when there is a 10% deviation in latency and/or a 50% deviation in peak-to-peak amplitude, as per clinical requirements. The algorithm gives consistent results when monitoring SSEP in surgical procedures of up to 6 hours, even with this significantly reduced number of trials. In this study, the analysis was performed on data recorded from 29 patients undergoing surgery, during which the posterior tibial nerve was stimulated and the SSEP response was recorded from the scalp. The method is shown empirically to be more clinically viable than present-day approaches. In all 29 cases, the algorithm took 4 seconds to extract an SSEP signal, compared with the several minutes taken by conventional methods. The monitoring process using the algorithm was successful and proved conclusive under clinical constraints throughout the different surgical procedures, with an accuracy of 91.5%. The higher accuracy and faster execution time observed in the present study in determining the SSEP signals provide a much improved and effective neurophysiological monitoring process.
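A hedged sketch of two of the named steps, Chebyshev-window preconditioning and PCA across the twelve trials (the attenuation, trial length and Walsh-transform stage are assumptions or omitted):

    # Window each trial, then take the dominant right singular vector:
    # the repeatable SSEP concentrates there, while the quasi-stationary
    # EEG background spreads over the remaining components.
    import numpy as np
    from scipy.signal.windows import chebwin

    def extract_ssep(trials, attenuation_db=100):
        """trials: (12, N) array, one recorded trial per row."""
        w = chebwin(trials.shape[1], at=attenuation_db)   # Chebyshev time window
        pre = trials * w                                  # precondition each trial
        pre = pre - pre.mean(axis=1, keepdims=True)       # remove per-trial offset
        _, _, vt = np.linalg.svd(pre, full_matrices=False)
        return vt[0]                                      # dominant repeatable waveform

    rng = np.random.default_rng(6)
    t = np.linspace(0, 0.1, 500)
    ssep = np.exp(-((t - 0.04) ** 2) / 2e-5)              # synthetic evoked bump
    trials = ssep + 0.5 * rng.normal(size=(12, t.size))
    print(extract_ssep(trials).shape)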
Abstract:
Many classical as well as modern optimization techniques exist. One such modern method, belonging to the field of swarm intelligence, is ant colony optimization. This relatively new concept in optimization uses artificial ants and is inspired by the way real ants search for food. In this thesis, a novel ant colony optimization technique for continuous domains was developed. The goal was to provide improvements in computing time and robustness when compared to other optimization algorithms. Optimization function spaces can have extreme topologies and are therefore difficult to optimize. The proposed method effectively searched the domain and solved difficult single-objective optimization problems. The developed algorithm was run on numerous classic test cases for both single- and multi-objective problems. The results demonstrate that the method is robust and stable, and that the number of objective function evaluations is comparable to other optimization algorithms.
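A minimal sketch of ant colony optimization for continuous domains in the style of ACO_R (a solution archive with Gaussian sampling); the thesis's specific variant is not reproduced, and all parameters are assumptions:

    # Continuous ACO sketch: ants sample around archived solutions and
    # displace the worst archive member when they improve on it.
    import numpy as np

    def aco_continuous(f, dim, n_ants=10, archive=20, iters=200, xi=0.85, seed=0):
        rng = np.random.default_rng(seed)
        sols = rng.uniform(-5, 5, size=(archive, dim))
        costs = np.apply_along_axis(f, 1, sols)
        for _ in range(iters):
            order = np.argsort(costs)
            sols, costs = sols[order], costs[order]          # rank the archive
            for _ in range(n_ants):
                k = rng.integers(archive)                     # pick a guiding solution
                sigma = xi * np.abs(sols - sols[k]).mean(axis=0)
                x = rng.normal(sols[k], sigma + 1e-12)        # sample around it
                c = f(x)
                if c < costs[-1]:                             # replace the worst
                    sols[-1], costs[-1] = x, c
        best = np.argmin(costs)
        return sols[best], costs[best]

    sphere = lambda x: float(np.sum(x ** 2))
    x, c = aco_continuous(sphere, dim=5)
    print(round(c, 6))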
Abstract:
The focus of this thesis is text data compression based on the fundamental coding scheme known as the American Standard Code for Information Interchange, or ASCII. The research objective is the development of software algorithms that achieve significant compression of text data. Past and current compression techniques have been thoroughly reviewed to ensure a proper contrast between the compression results of the proposed technique and those of existing ones. The research problem is driven by the need to achieve higher compression of text files in order to save valuable memory space and increase their transmission rate. It was deemed necessary that the compression algorithm to be developed be effective even for small files and be able to contend with uncommon words, which are dynamically included in the dictionary once they are encountered. A critical design aspect of this compression technique is its compatibility with existing compression techniques; in other words, the developed algorithm can be used in conjunction with existing techniques to yield even higher compression ratios. This thesis demonstrates such capabilities and outcomes, and the research objective of achieving a higher compression ratio is attained.
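The thesis's algorithm itself is not given in the abstract; as an illustration of the core mechanism it describes, a dictionary that grows dynamically as unseen sequences are encountered, here is a classic LZW encoder seeded with the ASCII alphabet:

    # LZW: emit a code for the longest known prefix, then learn the
    # prefix extended by one character as a new dictionary entry.
    def lzw_encode(text):
        dictionary = {chr(i): i for i in range(128)}   # seed with ASCII codes
        next_code, current, out = 128, "", []
        for ch in text:
            candidate = current + ch
            if candidate in dictionary:
                current = candidate                    # keep extending the match
            else:
                out.append(dictionary[current])
                dictionary[candidate] = next_code      # learn the new sequence
                next_code += 1
                current = ch
        if current:
            out.append(dictionary[current])
        return out

    codes = lzw_encode("the theme then the theme")
    print(len(codes), "codes for", len("the theme then the theme"), "characters")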
Abstract:
The effectiveness of an optimization algorithm can be reduced to its ability to navigate an objective function's topology. Hybrid optimization algorithms combine various optimization algorithms under a single meta-heuristic so that the hybrid is more robust, computationally efficient, and/or accurate than its constituent algorithms. This thesis proposes a novel meta-heuristic that uses search vectors to select the constituent algorithm appropriate for a given objective function. The hybrid is shown to perform competitively against several existing hybrid and non-hybrid optimization algorithms over a set of three hundred test cases. This thesis also proposes a general framework for evaluating the effectiveness of hybrid optimization algorithms. Finally, this thesis presents an improved Method of Characteristics code with novel boundary conditions, which characterizes pipelines better than previous codes. This code is coupled with the hybrid optimization algorithm in order to optimize the operation of real-world piston pumps.
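A loose sketch of the meta-heuristic idea, not the thesis's actual selection rule: score each constituent optimizer by the improvement its step produces and keep allocating steps to whichever constituent is currently paying off. The two constituents here are stand-ins:

    # Adaptive selection between two stand-in constituent optimizers.
    import numpy as np

    def hybrid_minimise(f, x0, iters=300, seed=0):
        rng = np.random.default_rng(seed)
        steps = {
            "random": lambda x: x + 0.5 * rng.normal(size=x.size),
            "coord":  lambda x: x - 0.1 * np.sign(x) * np.eye(x.size)[rng.integers(x.size)],
        }
        scores = {name: 1.0 for name in steps}
        x, fx = np.asarray(x0, float), f(x0)
        for _ in range(iters):
            probs = np.array(list(scores.values()))
            name = rng.choice(list(steps), p=probs / probs.sum())
            cand = steps[name](x)
            fc = f(cand)
            scores[name] = 0.9 * scores[name] + max(fx - fc, 0.0)  # reward improvement
            if fc < fx:
                x, fx = cand, fc
        return x, fx

    sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
    x, fx = hybrid_minimise(sphere, np.ones(4) * 3.0)
    print(round(fx, 4))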
Abstract:
Uncertainty in decision-making about patients' risk of re-admission arises from non-uniform data and a lack of knowledge of health-system variables. Knowledge of the impact of risk factors will give clinicians better decision-making and help reduce the number of patients admitted to hospital. Traditional approaches cannot account for the uncertain nature of the risk of hospital re-admission, and further problems arise from the large amount of uncertain information. Patients can be at high, medium or low risk of re-admission, and these strata have ill-defined boundaries. We believe that our model, which adapts the fuzzy regression method, offers a novel approach to handling uncertain data and uncertain relationships between health-system variables and the risk of re-admission. Because the risk bands have ill-defined boundaries, this approach allows clinicians to target individuals at those boundaries; targeting such individuals and providing them with proper care may make it possible to move patients from the high-risk to the low-risk band. In developing this algorithm, we aimed to help potential users assess patients against various risk-score thresholds and avoid the re-admission of high-risk patients through proper interventions. A model for predicting patients at high risk of re-admission will enable interventions to be targeted before costs have been incurred and health status has deteriorated. A risk-score cut-off level would flag patients and result in net savings where per-patient intervention costs are high. Preventing hospital re-admissions is important for patients, and our algorithm may also impact hospital income.
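A hedged sketch of the fuzzy notion in the abstract, where risk bands have ill-defined boundaries so a patient near a boundary holds partial membership in two bands; the membership functions and cut-offs below are assumptions:

    # Illustrative fuzzy memberships for low / medium / high risk bands.
    import numpy as np

    def band_memberships(risk_score):
        """risk_score in [0, 1]; returns membership degrees per band."""
        low    = np.clip((0.4 - risk_score) / 0.2, 0.0, 1.0)
        medium = np.clip(1.0 - abs(risk_score - 0.5) / 0.2, 0.0, 1.0)
        high   = np.clip((risk_score - 0.6) / 0.2, 0.0, 1.0)
        return {"low": low, "medium": medium, "high": high}

    # a patient at the medium/high boundary belongs partly to both bands,
    # and so can still be flagged for a targeted intervention
    print(band_memberships(0.62))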