854 results for "data gathering algorithm"


Relevance: 30.00%

Abstract:

Refraction simulators used for undergraduate training at Aston University did not realistically reflect variations in the relationship between vision and ametropia. This was because they used an algorithm, taken from the research literature, that strictly only applied to myopes or older hyperopes and did not factor in age and pupil diameter. The aim of this study was to generate new algorithms that overcame these limitations. Clinical data were collected from the healthy right eyes of 873 white subjects aged between 20 and 70 years. Vision and refractive error were recorded along with age and pupil diameter. Re-examination of 34 subjects enabled the calculation of coefficients of repeatability. The study population was slightly biased towards females and included many contact lens wearers. Sex and contact lens wear were, therefore, recorded in order to determine whether these might influence the findings. In addition, iris colour and cylinder axis orientation were recorded as these might also be influential. A novel Blur Sensitivity Ratio (BSR) was derived by dividing vision (expressed as minimum angle of resolution) by refractive error (expressed as a scalar vector, U). Alteration of the scalar vector, to account for additional vision reduction due to oblique cylinder axes, was not found to be useful. Decision tree analysis showed that sex, contact lens wear, iris colour and cylinder axis orientation did not influence the BSR. The following algorithms arose from two stepwise multiple linear regressions:

BSR (myopes) = 1.13 + (0.24 x pupil diameter) + (0.14 x U)
BSR (hyperopes) = (0.11 x pupil diameter) + (0.03 x age) - 0.22

These algorithms together accounted for 84% of the observed variance. They showed that pupil diameter influenced vision in both forms of ametropia. They also showed the age-related decline in the ability to accommodate in order to overcome reduced vision in hyperopia.
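A direct transcription of the two regression equations into code; the function name and interface are illustrative, and the units are assumed to be those used in the study (pupil diameter in mm, U in dioptres, age in years):

    def blur_sensitivity_ratio(group, pupil_diameter, U=0.0, age=0.0):
        """Predicted BSR (vision in minimum angle of resolution divided by
        the scalar vector U) from the study's regression equations."""
        if group == "myope":
            return 1.13 + 0.24 * pupil_diameter + 0.14 * U
        if group == "hyperope":
            return 0.11 * pupil_diameter + 0.03 * age - 0.22
        raise ValueError("group must be 'myope' or 'hyperope'")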

Relevance: 30.00%

Abstract:

Gastroesophageal reflux disease (GERD) is a common cause of chronic cough. For the diagnosis and treatment of GERD, it is desirable to quantify the temporal correlation between cough and reflux events. Cough episodes can be identified on esophageal manometric recordings as short-duration, rapid pressure rises. The present study aims at facilitating the detection of coughs by proposing an algorithm for the classification of cough events using manometric recordings. The algorithm detects cough episodes based on digital filtering, slope and amplitude analysis, and duration of the event. The algorithm has been tested on in vivo data acquired using a single-channel intra-esophageal manometric probe that comprises a miniature white-light interferometric fiber optic pressure sensor. Experimental results demonstrate the feasibility of using the proposed algorithm for identifying cough episodes based on real-time recordings using a single channel pressure catheter. The presented work can be integrated with commercial reflux pH/impedance probes to facilitate simultaneous 24-hour ambulatory monitoring of cough and reflux events, with the ultimate goal of quantifying the temporal correlation between the two types of events.
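The three stages named in the abstract (digital filtering, slope and amplitude analysis, and event duration) can be sketched as below. The filter design, all thresholds and the interface are illustrative assumptions, not values from the paper:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def detect_cough_events(pressure, fs, amp_thresh=50.0, slope_thresh=200.0,
                            min_dur=0.05, max_dur=0.5):
        """Flag short-duration, rapid pressure rises in a single-channel
        manometric trace. Units assumed: mmHg, sampled at fs Hz."""
        # Stage 1: high-pass filter to suppress slow baseline drift
        b, a = butter(2, 1.0 / (fs / 2), btype="high")
        filt = filtfilt(b, a, pressure)
        # Stage 2: candidate samples must show both large amplitude
        # and a steep positive slope
        slope = np.gradient(filt, 1.0 / fs)
        candidate = (filt > amp_thresh) & (slope > slope_thresh)
        # Stage 3: keep only runs whose duration is cough-like
        events, start = [], None
        for i, c in enumerate(candidate):
            if c and start is None:
                start = i
            elif not c and start is not None:
                if min_dur <= (i - start) / fs <= max_dur:
                    events.append((start / fs, i / fs))
                start = None
        return events  # list of (onset_s, offset_s) in seconds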

Relevance: 30.00%

Abstract:

This paper considers the problem of concept generalization in decision-making systems, taking into account such features of real-world databases as large size, incompleteness and inconsistency of the stored information. Methods from rough set theory (lower and upper approximations, positive regions and reducts) are used to solve this problem. A new discretization algorithm for continuous attributes is proposed. It substantially improves the overall performance of generalization algorithms and can be applied to the processing of real-valued attributes in large data tables. A search algorithm for significant attributes, combined with the discretization stage, is also developed; it avoids splitting the continuous domains of insignificant attributes into intervals.
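One standard way to realise "discretize only the significant attributes", in the spirit described above though not necessarily the paper's exact algorithm, is to place candidate cuts at class-boundary midpoints, score them by information gain, and leave attributes whose best cut is uninformative unsplit. The threshold and names below are illustrative:

    import numpy as np

    def entropy(y):
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def best_cut(values, labels):
        """Best single cut by information gain, trying midpoints between
        consecutive sorted values whose class labels differ."""
        order = np.argsort(values)
        v, y = np.asarray(values)[order], np.asarray(labels)[order]
        base, n = entropy(y), len(y)
        best_gain, best_point = 0.0, None
        for i in range(n - 1):
            if y[i] != y[i + 1] and v[i] != v[i + 1]:
                gain = (base - (i + 1) / n * entropy(y[:i + 1])
                             - (n - i - 1) / n * entropy(y[i + 1:]))
                if gain > best_gain:
                    best_gain, best_point = gain, (v[i] + v[i + 1]) / 2
        return best_gain, best_point

    def discretize_significant(X, y, min_gain=0.1):
        """Cut each attribute once at its best point; attributes whose best
        cut gains too little are treated as insignificant and left whole."""
        cuts = {}
        for j in range(X.shape[1]):
            gain, point = best_cut(X[:, j], y)
            if point is not None and gain >= min_gain:
                cuts[j] = point
        return cuts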

Relevance: 30.00%

Abstract:

Data processing services for the Meteosat geostationary satellite are presented. The implemented services correspond to different levels of remote-sensing data processing: noise reduction at the preprocessing level, cloud mask extraction at the low level and fractal dimension estimation at the high level. The cloud mask is obtained as a result of Markovian segmentation of infrared data. To overcome the high computational complexity of Markovian segmentation, a parallel algorithm has been developed. The fractal dimension of Meteosat data is estimated using fractional Brownian motion models.
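The paper estimates the fractal dimension with fractional-Brownian-motion models; as a simpler, standard stand-in that shows what such an estimate computes, here is a box-counting estimator for a binary cloud mask (illustrative only, not the paper's method; it assumes a non-empty mask of reasonable size):

    import numpy as np

    def box_counting_dimension(mask):
        """Slope of log(occupied boxes) vs log(1/box size) for a 2-D
        boolean array."""
        n = min(mask.shape)
        sizes = [s for s in (2, 4, 8, 16, 32, 64) if s <= n // 2]
        counts = []
        for s in sizes:
            # Tile the mask with s-by-s boxes and count occupied ones
            h, w = mask.shape[0] // s, mask.shape[1] // s
            blocks = mask[:h * s, :w * s].reshape(h, s, w, s)
            counts.append(int(blocks.any(axis=(1, 3)).sum()))
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope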

Relevance: 30.00%

Abstract:

The problem of finding the optimal join ordering for executing a query against a relational database management system is a combinatorial optimization problem, which makes a deterministic exhaustive search unacceptable for queries joining a large number of relations. In this work an adaptive genetic algorithm with dynamic population size is proposed for optimizing large join queries. The performance of the algorithm is compared with that of several classical non-deterministic optimization algorithms. Experiments have been performed by optimizing several random queries against a randomly generated data dictionary. The proposed adaptive genetic algorithm with a probabilistic selection operator outperforms the canonical genetic algorithm with elitist selection in a number of test runs, as well as two common random search strategies, and proves to be a viable alternative to existing non-deterministic optimization approaches.
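The adaptive, dynamic-population details are the paper's contribution; the underlying search can still be illustrated with a minimal canonical genetic algorithm over join-order permutations. The cost callback (mapping a permutation of relation indices to an estimated execution cost), the operators and all parameters are assumptions for illustration:

    import random

    def genetic_join_order(n_rels, cost, pop_size=40, generations=200,
                           mutation_rate=0.2):
        """Search for a cheap join order among permutations of n_rels relations."""
        pop = [random.sample(range(n_rels), n_rels) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)                    # cheapest orders first
            survivors = pop[: pop_size // 2]      # truncation selection
            children = []
            while len(survivors) + len(children) < pop_size:
                p1, p2 = random.sample(survivors, 2)
                cut = random.randrange(1, n_rels)
                prefix = p1[:cut]                 # order-preserving crossover
                child = prefix + [r for r in p2 if r not in prefix]
                if random.random() < mutation_rate:  # swap mutation
                    i, j = random.sample(range(n_rels), 2)
                    child[i], child[j] = child[j], child[i]
                children.append(child)
            pop = survivors + children
        return min(pop, key=cost)

A real optimizer would derive the cost callback from catalogue statistics; replacing the truncation selection with a probabilistic (fitness-proportional) operator, as the paper does, would change only the survivors step.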

Relevance: 30.00%

Abstract:

The constant increase of the human population results in more and more people living in regions prone to emergencies. In order to protect them from possible emergencies, we need effective methods for decision-making in emergency situations, because there may be a lack of time for taking a decision and a possible lack of data. One possible method for taking such decisions is shown in this article.

Relevance: 30.00%

Abstract:

A major drawback of artificial neural networks is their black-box character. Rule extraction algorithms are therefore becoming more and more important for explaining the rules embedded in neural networks. In this paper, we use a method for symbolic knowledge extraction from neural networks once they have been trained on the desired function. The basis of this method is the weights of the trained neural network. The method allows knowledge extraction from neural networks with continuous inputs and outputs, as well as rule extraction. An example of its application is shown, based on the extraction of the average load demand of a power plant.
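The abstract does not spell out the extraction procedure itself; as a toy illustration of analysing a trained network through its weights, the sketch below ranks the inputs of a one-hidden-layer network by the summed magnitude of their weight paths (a simplified Garson-style saliency) and phrases the top inputs as a rough "if-then" statement. Everything here, including the rule template, is hypothetical and not the paper's method:

    import numpy as np

    def weight_path_saliency(W_in, w_out):
        """W_in: (hidden, inputs) first-layer weights; w_out: (hidden,)
        output weights. Returns a normalised influence score per input."""
        paths = np.abs(W_in) * np.abs(w_out)[:, None]  # |path| per (hidden, input)
        saliency = paths.sum(axis=0)
        return saliency / saliency.sum()

    def crude_rule(saliency, names, top_k=2):
        """Phrase the most influential inputs as a rough if-then rule."""
        top = np.argsort(saliency)[::-1][:top_k]
        conds = " and ".join(f"{names[i]} is high" for i in top)
        return f"if {conds} then the output responds strongly"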

Relevance: 30.00%

Abstract:

In this paper a learning algorithm for adjusting the weight coefficients of the Cascade Neo-Fuzzy Neural Network (CNFNN) in sequential mode is introduced. The architecture has a structure similar to the Cascade-Correlation Learning Architecture proposed by S.E. Fahlman and C. Lebiere, but differs from it in the type of artificial neurons. The CNFNN consists of neo-fuzzy neurons, which can be adjusted using high-speed linear learning procedures. The proposed CNFNN is characterized by a high learning rate and a small required training sample, and its operation can be described by fuzzy linguistic "if-then" rules, providing "transparency" of the obtained results compared with conventional neural networks. The online learning algorithm allows input data to be processed sequentially in real time.
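A neo-fuzzy neuron keeps, for every input, a set of triangular membership functions with one linear weight each, so its output is linear in the weights and fast LMS-style updates apply; that is the property the abstract exploits. The sketch below shows a single such neuron with an online update (inputs assumed scaled to [0, 1]; the cascade-growing mechanism of the CNFNN is omitted):

    import numpy as np

    def tri_memberships(x, centers):
        """Triangular memberships over a uniform grid; for x inside the
        grid, at most two are non-zero and they sum to one."""
        spacing = centers[1] - centers[0]
        return np.maximum(0.0, 1.0 - np.abs(x - centers) / spacing)

    class NeoFuzzyNeuron:
        def __init__(self, n_inputs, n_sets=5, lr=0.1):
            self.centers = np.linspace(0.0, 1.0, n_sets)
            self.w = np.zeros((n_inputs, n_sets))  # one weight per membership
            self.lr = lr

        def forward(self, x):
            self.mu = np.array([tri_memberships(xi, self.centers) for xi in x])
            return float((self.w * self.mu).sum())

        def update(self, x, target):
            """One sequential (online) LMS step on a single sample."""
            error = target - self.forward(x)
            self.w += self.lr * error * self.mu
            return error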

Relevance: 30.00%

Abstract:

AMS Subj. Classification: 62P10, 62H30, 68T01

Relevance: 30.00%

Abstract:

Report published in the Proceedings of the National Conference on "Education in the Information Society", Plovdiv, May, 2013

Relevance: 30.00%

Abstract:

2000 Mathematics Subject Classification: 91E45.

Relevance: 30.00%

Abstract:

The correlated probit model is frequently used for multiple ordinal outcomes since it allows different correlation structures to be incorporated seamlessly. The estimation of the probit model parameters based on direct maximization of the limited-information maximum likelihood is a numerically intensive procedure. We propose an extension of the EM algorithm for obtaining maximum likelihood estimates for a correlated probit model for multiple ordinal outcomes. The algorithm is implemented in R, the free software environment for statistical computing and graphics. We present two simulation studies examining the performance of the developed algorithm. We apply the model to data on 121 women with cervical or endometrial cancer who developed normal tissue reactions as a result of post-operative external beam pelvic radiotherapy. In this work we focus on modeling the effects of a genetic factor on early skin and early urogenital tissue reactions and on assessing the strength of association between the two types of reactions. We established that there was an association between skin reactions and the polymorphism XRCC3 codon 241 (C>T) (rs861539) and that skin and urogenital reactions were positively correlated. ACM Computing Classification System (1998): G.3.
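To make the EM idea concrete, here is a minimal sketch for the simpler univariate ordinal probit (a single outcome, fixed cutpoints, unit error variance) rather than the paper's correlated multi-outcome model: the E-step replaces each latent response by its truncated-normal conditional mean, and the M-step regresses those means on the covariates. Function and variable names are illustrative:

    import numpy as np
    from scipy.stats import norm

    def em_ordinal_probit(X, y, cuts, n_iter=200):
        """X: (n, p) covariates; y: (n,) categories 0..C-1;
        cuts: (C+1,) cutpoints with cuts[0] = -inf, cuts[C] = +inf."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            mu = X @ beta
            a = cuts[y] - mu          # standardised lower truncation
            b = cuts[y + 1] - mu      # standardised upper truncation
            denom = np.clip(norm.cdf(b) - norm.cdf(a), 1e-12, None)
            # E-step: conditional mean of the latent normal response
            ez = mu + (norm.pdf(a) - norm.pdf(b)) / denom
            # M-step: least squares of expected latents on X
            beta = np.linalg.lstsq(X, ez, rcond=None)[0]
        return beta

For three categories one might take cuts = np.array([-np.inf, 0.0, 1.0, np.inf]); estimating the cutpoints and the cross-outcome correlation, as the paper does, requires further M-step updates.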

Relevance: 30.00%

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May, 2014

Relevance: 30.00%

Abstract:

This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ = n/p is less than one. We explore the statistical and computational challenges inherent in these high dimensional low sample size (HDLSS) problems and present statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets where κ < 1. These techniques require careful tuning, so several extensions of cross-validation were investigated to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
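The p >> n setting and the role of tuning can be reproduced in a few lines; the data here are synthetic stand-ins for the microarray sets, and the penalty grid is arbitrary:

    import numpy as np
    from sklearn.linear_model import RidgeClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n, p = 60, 5000                    # kappa = n/p << 1, as in the study
    X = rng.standard_normal((n, p))
    y = (X[:, :10].sum(axis=1) > 0).astype(int)   # signal in 10 "genes"

    # L2 regularization keeps the fit well-posed despite n < p;
    # cross-validation chooses the penalty strength.
    for alpha in (0.1, 1.0, 10.0):
        score = cross_val_score(RidgeClassifier(alpha=alpha), X, y, cv=5).mean()
        print(f"alpha={alpha}: mean CV accuracy = {score:.2f}")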

Relevance: 30.00%

Abstract:

Most machine-learning algorithms are designed for datasets with features of a single type, whereas very little attention has been given to datasets with mixed-type features. We recently proposed a model to handle mixed types with a probabilistic latent variable formalism. This model, called the generalised generative topographic mapping (GGTM), describes the data by type-specific distributions that are conditionally independent given the latent space. It has often been observed that visualisations of high-dimensional datasets can be poor in the presence of noisy features. In this paper we therefore propose to extend the GGTM to estimate feature saliency values (GGTMFS) as an integrated part of the parameter learning process with an expectation-maximisation (EM) algorithm. The efficacy of the proposed GGTMFS model is demonstrated on both synthetic and real datasets.
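The key assumption, type-specific distributions that are conditionally independent given the latent space, can be written down directly. The sketch below evaluates, for every data point and every latent grid node, a joint log-likelihood composed of a Gaussian term for continuous features and a Bernoulli term for binary ones; the full GGTM/GGTMFS parameter updates and the saliency estimation are omitted, and all names are illustrative:

    import numpy as np

    def mixed_loglik(Xc, Xb, mu_c, mu_b, sigma):
        """Xc: (n, Dc) continuous data; Xb: (n, Db) binary data;
        mu_c: (K, Dc) Gaussian means and mu_b: (K, Db) Bernoulli
        probabilities, one row per latent node; returns (n, K)."""
        # Gaussian term (isotropic, shared variance)
        d2 = ((Xc[:, None, :] - mu_c[None, :, :]) ** 2).sum(axis=-1)
        ll_c = -0.5 * d2 / sigma**2 - Xc.shape[1] * np.log(sigma * np.sqrt(2 * np.pi))
        # Bernoulli term for the binary features
        eps = 1e-9
        ll_b = Xb @ np.log(mu_b.T + eps) + (1 - Xb) @ np.log(1 - mu_b.T + eps)
        # Conditional independence given the node: the terms simply add
        return ll_c + ll_b

    def responsibilities(ll):
        """E-step responsibilities, normalised over the latent nodes."""
        a = ll - ll.max(axis=1, keepdims=True)
        r = np.exp(a)
        return r / r.sum(axis=1, keepdims=True)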