939 results for Supervised classifier


Relevance:

10.00%

Publisher:

Abstract:

This study aimed to evaluate the effects of carvedilol treatment and a regimen of supervised aerobic exercise training on quality of life and other clinical, echocardiographic, and biochemical variables in a group of client-owned dogs with chronic mitral valve disease (CMVD). Ten healthy dogs (control) and 36 CMVD dogs were studied, with the latter group divided into three subgroups. In addition to conventional treatment (benazepril, 0.3-0.5 mg/kg once a day, and digoxin, 0.0055 mg/kg twice daily), 13 dogs received exercise training (subgroup I; 10.3±2.1 years), 10 dogs received carvedilol (0.3 mg/kg twice daily) and exercise training (subgroup II; 10.8±1.7 years), and 13 dogs received only carvedilol (subgroup III; 10.9±2.1 years). All drugs were administered orally. Clinical, laboratory, and Doppler echocardiographic variables were evaluated at baseline and after 3 and 6 months. Exercise training was conducted from months 3 to 6. The mean training speed increased for both subgroups I and II (ANOVA, P>0.001), indicating improvement in physical conditioning at the end of the exercise period. Quality of life and functional class were improved for all subgroups at the end of the study. The N-terminal pro-brain natriuretic peptide (NT-proBNP) level increased in subgroup I from baseline to 3 months but remained stable after the introduction of training (from 3 to 6 months). For subgroups II and III, NT-proBNP levels remained stable throughout the study. No difference was observed in the other variables between the three evaluation periods. The combination of carvedilol or exercise training with conventional treatment in CMVD dogs led to improvements in quality of life and functional class. Therefore, light walking should be encouraged in CMVD dogs.

Relevance:

10.00%

Publisher:

Abstract:

The growing population in cities increases energy demand and affects the environment through increased carbon emissions. Information and communications technology solutions that enable energy optimization are needed to address this growing energy demand and to reduce carbon emissions. District heating systems optimize energy production by reusing waste energy in combined heat and power plants. Forecasting the heat load demand of residential buildings assists in optimizing energy production and consumption in a district heating system. However, the large number of factors involved, such as weather forecasts, district heating operational parameters and user behavioural parameters, makes heat load forecasting a challenging task. This thesis proposes a probabilistic machine learning model using a Naive Bayes classifier to forecast the hourly heat load demand of three residential buildings in the city of Skellefteå, Sweden, over the winter and spring seasons. The district heating data collected from sensors installed at the residential buildings in Skellefteå is used to build the Bayesian network that forecasts the heat load demand for horizons of 1, 2, 3, 6 and 24 hours. The proposed model is validated in four cases that study the influence of various parameters on the heat load forecast, through trace-driven analysis in Weka and GeNIe. Results show that current heat load consumption and the outdoor temperature forecast are the two parameters with the most influence on the heat load forecast. The proposed model achieves average accuracies of 81.23% and 76.74% for a forecast horizon of 1 hour in the three buildings for the winter and spring seasons, respectively. The model also achieves an average accuracy of 77.97% across the three buildings and both seasons for the 1-hour forecast horizon while using only 10% of the training data. The results indicate that even a simple model such as a Naive Bayes classifier can forecast heat load demand using relatively little training data.
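
A minimal sketch of the kind of forecast described above, not the thesis pipeline (which used Bayesian networks in Weka and GeNIe): a Gaussian Naive Bayes classifier predicts the next-hour heat-load class from the two inputs the study found most influential, current heat load and outdoor temperature forecast. The feature names, bin edges, and synthetic data are illustrative assumptions only.

```python
# Sketch: Naive Bayes forecast of the next-hour heat-load class.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
current_load = rng.uniform(10, 120, n)           # kW, current hourly heat load
outdoor_temp_forecast = rng.uniform(-25, 15, n)  # degC, forecast for the next hour

# Hypothetical "true" next-hour load: colder forecasts and higher current load
# push demand up; the noise term stands in for user behaviour.
next_load = 0.8 * current_load - 1.5 * outdoor_temp_forecast + rng.normal(0, 8, n)

# Discretize the target into low / medium / high classes, since the Bayesian
# classifier forecasts a class rather than a continuous value.
bins = np.quantile(next_load, [0.33, 0.66])
y = np.digitize(next_load, bins)

X = np.column_stack([current_load, outdoor_temp_forecast])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = GaussianNB().fit(X_train, y_train)
print(f"1-hour-ahead class accuracy: {model.score(X_test, y_test):.2%}")
```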

Relevance:

10.00%

Publisher:

Abstract:

Object detection is a fundamental task of computer vision that is used as a core component in a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized before being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information in addition to the object location, for example its pose. The object class model, i.e. the appearance of the object parts and their spatial variation (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the parts of the object is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. Thus, a discriminative classifier is used to prune the false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
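
A minimal sketch of the GMM part-appearance step, not the thesis implementation: a standard Gaussian Mixture Model (rather than the randomized GMM) is fitted to feature vectors that stand in for the Gabor responses around annotated part locations, and candidate patches are then mapped to part probabilities. The feature dimensions and synthetic data are assumptions for illustration.

```python
# Sketch: unsupervised part appearance model via a Gaussian Mixture Model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n_train_parts, feat_dim, n_components = 200, 16, 5

# Assumed pre-computed Gabor feature vectors of patches around object parts.
gabor_features = rng.normal(size=(n_train_parts, feat_dim))

# Each mixture component acts as one "part type" learned without labels.
appearance = GaussianMixture(n_components=n_components, covariance_type="diag",
                             random_state=0).fit(gabor_features)

# At detection time, every candidate patch is turned into part probabilities.
candidate_patches = rng.normal(size=(10, feat_dim))
part_probs = appearance.predict_proba(candidate_patches)   # shape (10, n_components)
print(part_probs.sum(axis=1))  # each row sums to 1
```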

Relevance:

10.00%

Publisher:

Abstract:

This work investigates theoretical properties of symmetric and anti-symmetric kernels. The first chapters give an overview of the theory of kernels used in supervised machine learning. The central focus is on the regularized least squares algorithm, which is motivated as a problem of function reconstruction through an abstract inverse problem. A brief review of reproducing kernel Hilbert spaces shows how kernels define an implicit hypothesis space with multiple equivalent characterizations, and how this space may be modified by incorporating prior knowledge. Mathematical results for the abstract inverse problem, in particular spectral properties, the pseudoinverse and regularization, are recalled and then specialized to kernels. Symmetric and anti-symmetric kernels are applied to relation learning problems that incorporate the prior knowledge that the relation is symmetric or anti-symmetric, respectively. Theoretical properties of these kernels are proved in a draft on which this thesis is based and are comprehensively referenced here. The proofs show that these kernels can be guaranteed to learn only symmetric or anti-symmetric relations, and that they can learn any relation that the original kernel can learn when restricted to its symmetric or anti-symmetric part. Further results establish spectral properties of these kernels, the central result being a simple inequality for the trace of the estimator, also called the effective dimension. This quantity is used in learning bounds to guarantee smaller variance.
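
As an illustrative sketch under stated assumptions (not the thesis code), the snippet below builds one standard (anti)symmetrized pairwise kernel from an RBF base kernel and fits regularized least squares on a synthetic symmetric relation. The base kernel, data, and regularization value are placeholders; the final check illustrates the guarantee that the symmetric kernel predicts identically on swapped pairs.

```python
# Sketch: symmetric / anti-symmetric pairwise kernels + regularized least squares.
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def pairwise_kernel(pairs1, pairs2, sign=+1):
    """K((a,b),(c,d)) = 0.5 * [k(a,c)k(b,d) + sign * k(a,d)k(b,c)].
    sign=+1 gives the symmetric kernel, sign=-1 the anti-symmetric one."""
    A1, B1 = pairs1
    A2, B2 = pairs2
    return 0.5 * (rbf(A1, A2) * rbf(B1, B2) + sign * rbf(A1, B2) * rbf(B1, A2))

rng = np.random.default_rng(2)
n, d, lam = 60, 3, 1e-2
A, B = rng.normal(size=(n, d)), rng.normal(size=(n, d))
y = np.sum(A * B, axis=1)                 # a symmetric relation: y(a,b) = y(b,a)

K = pairwise_kernel((A, B), (A, B), sign=+1)
alpha = np.linalg.solve(K + lam * np.eye(n), y)   # regularized least squares

# Predictions on swapped pairs equal predictions on the originals, i.e. the
# symmetric kernel can only represent symmetric relations.
K_swapped = pairwise_kernel((B, A), (A, B), sign=+1)
print(np.allclose(K_swapped @ alpha, K @ alpha))  # True
```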

Relevance:

10.00%

Publisher:

Abstract:

Convolutional Neural Networks (CNNs) have become the state-of-the-art method for many large-scale visual recognition tasks. For many practical applications, CNN architectures have a restrictive requirement: a huge amount of labeled data is needed for training. The idea of generative pretraining is to obtain initial weights for the network by training it in a completely unsupervised way, and then to fine-tune the weights for the task at hand using supervised learning. In this thesis, a general introduction to Deep Neural Networks and their training algorithms is given, and these methods are applied to classification tasks on handwritten digits and natural images to develop unsupervised feature learning. The goal of this thesis is to find out whether the effect of pretraining is damped by recent practical advances in the optimization and regularization of CNNs. The experimental results show that pretraining is still a substantial regularizer, but not a necessary step, in training Convolutional Neural Networks with rectified activations. On handwritten digits, the proposed pretraining model achieved a classification accuracy comparable to the state-of-the-art methods.
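
A rough sketch of the pretrain-then-fine-tune pattern, with a small convolutional autoencoder standing in for the thesis's generative pretraining and a ReLU classifier for the supervised stage. The architecture, sizes, placeholder tensors, and the choice of an autoencoder are illustrative assumptions, not the models actually studied.

```python
# Sketch: unsupervised pretraining of a conv encoder, then supervised fine-tuning.
import torch
import torch.nn as nn

encoder = nn.Sequential(                       # 1x28x28 -> 16x14x14
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
)
decoder = nn.Sequential(                       # 16x14x14 -> 1x28x28
    nn.Upsample(scale_factor=2),
    nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
)

# Stage 1: unsupervised pretraining (reconstruct unlabeled images).
autoencoder = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
unlabeled = torch.rand(64, 1, 28, 28)          # placeholder for e.g. digit images
for _ in range(5):
    loss = nn.functional.mse_loss(autoencoder(unlabeled), unlabeled)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning with the pretrained encoder as initialization.
classifier = nn.Sequential(encoder, nn.Flatten(), nn.Linear(16 * 14 * 14, 10))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
images, labels = torch.rand(64, 1, 28, 28), torch.randint(0, 10, (64,))
for _ in range(5):
    loss = nn.functional.cross_entropy(classifier(images), labels)
    opt.zero_grad(); loss.backward(); opt.step()
print("fine-tuned loss:", loss.item())
```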

Relevance:

10.00%

Publisher:

Abstract:

Diabetic retinopathy, age-related macular degeneration and glaucoma are the leading causes of blindness worldwide. Automatic methods for diagnosis exist, but their performance is limited by the quality of the data. Spectral retinal images provide a significantly better representation of the colour information than common grayscale or red-green-blue retinal imaging, and thus have the potential to improve the performance of automatic diagnosis methods. This work studies the image processing techniques required for composing spectral retinal images with accurate reflection spectra, including wavelength channel image registration, spectral and spatial calibration, illumination correction, and the estimation of depth information from image disparities. The composition of a spectral retinal image database of patients with diabetic retinopathy is described. The database includes gold standards for a number of pathologies and retinal structures, marked by two expert ophthalmologists. The diagnostic applications of the reflectance spectra are studied using supervised classifiers for lesion detection. In addition, inversion of a model of light transport is used to estimate histological parameters from the reflectance spectra. Experimental results suggest that the methods for composing, calibrating and postprocessing spectral images presented in this work can be used to improve the quality of the spectral data. The experiments on the direct and indirect use of the data show the diagnostic potential of spectral retinal data over standard retinal images. The use of spectral data could improve automatic and semi-automated diagnostics for the screening of retinal diseases, the quantitative detection of retinal changes for follow-up, clinically relevant end-points for clinical studies, and the development of new therapeutic modalities.
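
A minimal sketch of the "direct" supervised use of the spectra: a per-pixel classifier trained on reflectance spectra against an expert-marked lesion mask. The spectral cube size, wavelength count, and the choice of a random forest are illustrative assumptions, not the classifiers evaluated in the thesis.

```python
# Sketch: per-pixel lesion detection from reflectance spectra.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
height, width, n_channels = 64, 64, 30           # spectral retinal image cube
cube = rng.random((height, width, n_channels))   # calibrated reflectance spectra
lesion_mask = rng.random((height, width)) > 0.9  # gold-standard lesion pixels

X = cube.reshape(-1, n_channels)                 # one spectrum per pixel
y = lesion_mask.ravel().astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"per-pixel lesion detection accuracy: {clf.score(X_test, y_test):.2%}")
```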

Relevance:

10.00%

Publisher:

Abstract:

This thesis deals with the nature of ignorance as it was interpreted in the Upanishadic tradition, specifically in Advaita Vedanta, and in early and Mahayana Buddhism, especially in the Madhyamika school. The approach is a historical and comparative one. It examines the early thought of both the Upanishads and Buddhism about avidya (ignorance), shows how the notion was treated by the more speculative and philosophically oriented schools which based themselves on the early works, and sees how their views differ. The thesis will show that the Vedanta tended to treat avidya as a topic for metaphysical speculation as the school developed, drifting from its initial existential concerns, while the Madhyamika remained in contact with the existential concerns evident in the first discourses of the Buddha. The word "notion" has been chosen for referring to avidya, even though it may have non-intellectual and emotional connotations, to avoid more popular alternatives such as "concept" or "idea". In neither the Upanishads, Advaita Vedanta, nor Buddhism is ignorance merely a concept or an idea; only in a secondary sense, in texts and speech, does it become one. Avidya has more to do with the lived situation in which man finds himself, with the subject-object separation in which he feels he exists, than with intellectual constructs. Western thought has begun to realize the same with concerns such as being in modern ontology, and has chosen to speak about it in terms of the question of being. Avidya, however, is not a 'question'. If questions were to be put regarding the nature of avidya, they would be more of the sort "What is not avidya?", though even here language bestows a status on it which avidya does not have. In considering a work of the Eastern tradition, we face the danger of imposing Western concepts on it. Granted that avidya is customarily rendered in English as ignorance, the ways in which the East and West view ignorance differ. Pedagogically, the European cultures, grounded in the ancient Greek culture, view ignorance as a lack or an emptiness. A child is ignorant of certain things, and the purpose of formal education, in fact if not in theory, is to fill him with enough knowledge so that he can cope with the complexities and the expectations of society. On another level, we feel that study and research will lead to the discovery of solutions, which we now lack, for problems now defying solution. The East, on the other hand, sees avidya in a different light. Ignorance is not a lack, but a presence. Religious and philosophical literature directs its efforts not towards acquiring something new, but at removing the ideas and opinions that individuals have formed about themselves and the world. When that is fully accomplished, say the sages, then Wisdom, which has been obscured by those opinions, will present itself. Nothing new has to be learned, though we do have to 'learn' that much. The growing interest in the West in Eastern religions and philosophies may, in time, influence our theoretical and practical approaches to education and learning, not only in the established educational institutions, but in religious, psychological, and spiritual activities as well. However, the requirements of this thesis do not permit a formulation of revolutionary method or a call to action.
It focuses instead on the textual arguments which attempt to convince readers that the world in which they take themselves to exist is not, in essence, real, on the ways in which the limitations of language are disclosed, and on the provisional and limited schemes that are built up to help students see through their ignorance. The metaphysics are provisional because they act only as spurs and guides. Both the Upanishadic and Buddhist traditions dealt with here stress that language constantly fails to encompass the Real, so even terms such as 'the Real', 'Absolute', etc., serve only to lead to a transcendent experience. The sections dealing with the Upanishads and Advaita Vedanta show some of the historical evolution of the notion of avidya, how it was dealt with as maya, and the questions that arose as to its locus. With Gaudapada we see the beginnings of a more abstract treatment of the topic, and the influence of Buddhism. Though Sankara's interest was primarily directed towards constructing a philosophy to help others attain moksha (liberation), he too introduced technical terminology not found in the works of his predecessors. His work is impressive, but areas of it are incomplete. Numbers of his followers tried to complete the systematic presentation of his insights. Their work focuses on explanations of adhyasa (superimposition), the locus and object of ignorance, and the means by which Brahman takes itself to be the jiva and the world. The section on early Buddhism examines avidya in the context of the four truths, together with duhkha (suffering), the role it plays in the chain of dependent causation, and the problems that arise with the doctrine of anatman. With the doctrines of early Buddhism as a base, the Madhyamika elaborated questions that the Buddha had said tended not to edification. One of these had to do with own-being, or svabhava. This serves as a centre around which a discussion of ignorance unfolds, both individual and collective ignorance. There follows a treatment of the cessation of ignorance as it is discussed within this school. The final section tries to present the similarities and differences in the natures of ignorance in the two traditions and discusses the factors responsible for them. Acknowledgements: I would like to thank Dr. Sinha for the time spent and suggestions made on the section dealing with Sankara and the Advaita Vedanta commentators, and Dr. Sprung, who supervised, directed, corrected and encouraged the thesis as a whole, but especially the section on Madhyamika, and the final comparison.

Relevance:

10.00%

Publisher:

Abstract:

Bioinformatics applies computers to problems in molecular biology. Previous research has not addressed edit metric decoders, yet decoders for quaternary edit metric codes are finding use in bioinformatics problems with applications to DNA. By using side effect machines we hope to provide efficient decoding algorithms for this open problem. Two ideas for decoding algorithms are presented and examined. Both decoders use Side Effect Machines (SEMs), which are generalizations of finite state automata. Single Classifier Machines (SCMs) use a single side effect machine to classify all words within a code. Locking Side Effect Machines (LSEMs) use multiple side effect machines to create a tree structure of subclassification. The goal is to examine these techniques and provide new decoders for existing codes. Also presented are ideas on best practices for the creation of these two types of new edit metric decoders.
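
A rough sketch of the side-effect-machine idea as read from this abstract, with loudly labeled assumptions: a finite state automaton over the DNA alphabet whose "side effect" is a vector of state-visit counts, which is then used as a feature signature to assign a received word to its nearest codeword, in the spirit of a single classifier machine. The transition table and the nearest-signature decision rule are illustrative inventions, not the actual decoders studied.

```python
# Sketch: a side-effect machine that records state-visit counts for DNA words.
from collections import Counter

N_STATES = 4
# Hypothetical transition table: next_state = TRANSITIONS[state][symbol]
TRANSITIONS = [
    {"A": 0, "C": 1, "G": 2, "T": 3},
    {"A": 2, "C": 1, "G": 3, "T": 0},
    {"A": 3, "C": 0, "G": 1, "T": 2},
    {"A": 1, "C": 2, "G": 0, "T": 3},
]

def side_effects(word, start_state=0):
    """Run the machine on a word and return normalized state-visit counts."""
    state, visits = start_state, Counter({start_state: 1})
    for symbol in word:
        state = TRANSITIONS[state][symbol]
        visits[state] += 1
    total = sum(visits.values())
    return [visits[s] / total for s in range(N_STATES)]

def classify(word, codewords):
    """Single-classifier-machine style decision: nearest codeword signature."""
    sig = side_effects(word)
    def dist(cw):
        return sum((a - b) ** 2 for a, b in zip(sig, side_effects(cw)))
    return min(codewords, key=dist)

codewords = ["ACGTACGT", "GGGGCCCC", "ATATATAT"]
print(classify("ACGTACGA", codewords))   # a received word one edit away from a codeword
```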

Relevance:

10.00%

Publisher:

Abstract:

Older adults represent the most sedentary segment of the adult population, and thus it is critical to investigate factors that influence exercise behaviour in this age group. The purpose of this study was to examine the influence of a general exercise program, incorporating cardiovascular, strength, flexibility, and balance components, on task self-efficacy and SPA in older adult men and women. Participants (n = 114, mean age = 67 years) were recruited from the Niagara region and randomly assigned to a 12-week supervised exercise program or a wait-list control. Task self-efficacy and SPA measures were taken at baseline and at program end. The present study found that task self-efficacy was a significant predictor of leisure-time physical activity for older adults. In addition, change in task self-efficacy was a significant predictor of change in SPA. The findings of this study suggest that sources of task self-efficacy should be considered for exercise interventions targeting older adults.

Relevance:

10.00%

Publisher:

Abstract:

Remote sensing techniques involving hyperspectral imagery have applications in a number of sciences that study aspects of the surface of the planet. The analysis of hyperspectral images is complex because of the large amount of information involved and the noise within the data. Investigating images in order to identify minerals, rocks, vegetation and other materials is one application of hyperspectral remote sensing in the earth sciences. This thesis evaluates the performance of two classification and clustering techniques on hyperspectral images for mineral identification. Support Vector Machines (SVM) and Self-Organizing Maps (SOM) are applied as the classification and clustering techniques, respectively. Principal Component Analysis (PCA) is used to prepare the data for analysis; its purpose is to reduce the amount of data that needs to be processed by identifying the most important components within the data. A well-studied dataset from Cuprite, Nevada, and a more complex dataset from Baffin Island were used to assess the performance of these techniques. The main goal of this research is to evaluate the advantage of training a classifier on a small amount of data compared to an unsupervised method. Determining the effect of feature extraction on the accuracy of the clustering and classification methods is another goal. This thesis concludes that using PCA increases the learning accuracy, especially in classification: SVM classifies the Cuprite data with high precision, while the SOM challenges SVM on datasets with a high level of noise, such as Baffin Island.
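
An illustrative sketch, not the thesis experiments: PCA for dimensionality reduction followed by an SVM trained on a small labeled subset of pixels, mirroring the scarce-supervision comparison described above. The synthetic cube and the band, class, and component counts are placeholders for the Cuprite and Baffin Island datasets.

```python
# Sketch: PCA feature extraction + SVM classification of hyperspectral pixels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_pixels, n_bands, n_minerals = 5000, 200, 5
X = rng.random((n_pixels, n_bands))              # hyperspectral pixel spectra
y = rng.integers(0, n_minerals, n_pixels)        # mineral labels (ground truth)

# Train on only 5% of the labeled pixels to mimic a small amount of training data.
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.05,
                                                    stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print(f"mineral classification accuracy: {model.score(X_test, y_test):.2%}")
```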

Relevance:

10.00%

Publisher:

Abstract:

Genetic Programming (GP) is a widely used methodology for solving various computational problems. GP's problem-solving ability is usually hindered by its long execution times. In this thesis, GP is applied to real-time computer vision; in particular, object classification and tracking using a parallel GP system is discussed. First, a study of suitable GP languages for object classification is presented. Two main GP approaches to visual pattern classification, namely block-classifiers and pixel-classifiers, were studied. Results showed that the pixel-classifiers generally performed better. Using these results, a suitable language was selected for the real-time implementation. Synthetic video data was used in the experiments. The goal of the experiments was to evolve a unique classifier for each texture pattern that existed in the video. The experiments revealed that the system was capable of correctly tracking the textures in the video, and its performance was on par with real-time requirements.
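
A sketch of the pixel-classifier idea under stated assumptions: a candidate GP program is treated as a function of per-pixel features (here intensity plus a local mean and standard deviation), and its fitness is the classification accuracy against a labeled texture mask. The feature set and the hand-written "evolved" expression are placeholders for programs produced by the GP system, which is not reproduced here.

```python
# Sketch: evaluating a pixel-classifier candidate against a texture mask.
import numpy as np
from scipy.ndimage import uniform_filter

def pixel_features(image, size=5):
    local_mean = uniform_filter(image, size=size)
    local_sq_mean = uniform_filter(image ** 2, size=size)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0))
    return image, local_mean, local_std

def candidate_program(intensity, local_mean, local_std):
    # Stand-in for an evolved expression tree.
    return (intensity - local_mean) + 0.5 * local_std > 0.1

def fitness(program, image, label_mask):
    """Fraction of pixels whose predicted texture class matches the mask."""
    prediction = program(*pixel_features(image))
    return np.mean(prediction == label_mask)

rng = np.random.default_rng(5)
frame = rng.random((64, 64))                 # one frame of synthetic video
mask = frame > 0.5                           # ground-truth texture membership
print(f"candidate fitness: {fitness(candidate_program, frame, mask):.2f}")
```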

Relevance:

10.00%

Publisher:

Abstract:

The curse of dimensionality is a major problem in the fields of machine learning, data mining and knowledge discovery. Exhaustive search for the optimal subset of relevant features in a high-dimensional dataset is NP-hard. Sub-optimal population-based stochastic algorithms such as GP and GA are good choices for searching through large search spaces, and are usually more feasible than exhaustive and deterministic search algorithms. On the other hand, population-based stochastic algorithms often suffer from premature convergence on mediocre sub-optimal solutions. The Age Layered Population Structure (ALPS) is a novel metaheuristic for overcoming the problem of premature convergence in evolutionary algorithms and for improving search in the fitness landscape. The ALPS paradigm uses an age measure to control breeding and competition between individuals in the population. This thesis uses a modification of the ALPS GP strategy called Feature Selection ALPS (FSALPS) for feature subset selection and classification on varied supervised learning tasks. FSALPS uses a novel frequency-count system to rank features in the GP population based on evolved feature frequencies. The ranked features are translated into probabilities, which are used to control evolutionary processes such as terminal-symbol selection for the construction of GP trees and sub-trees. The FSALPS metaheuristic continuously refines the feature subset selection process while simultaneously evolving efficient classifiers through a non-converging evolutionary process that favors the selection of features with high discrimination of class labels. We investigated and compared the performance of canonical GP, ALPS and FSALPS on high-dimensional benchmark classification datasets, including a hyperspectral image. Using Tukey's HSD ANOVA test at a 95% confidence interval, ALPS and FSALPS dominated canonical GP in evolving smaller but efficient trees with fewer bloated expressions. FSALPS significantly outperformed canonical GP, ALPS and some feature selection strategies reported in the related literature on dimensionality reduction.
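
A hedged sketch of the frequency-count idea described above: count how often each feature terminal appears in the current GP population, turn the counts into selection probabilities, and sample terminals for new trees and sub-trees accordingly. The toy "population" of terminal lists is illustrative only; FSALPS integrates this ranking into an age-layered GP, which is not shown.

```python
# Sketch: frequency-based terminal-selection probabilities, FSALPS style.
import numpy as np
from collections import Counter

FEATURES = [f"x{i}" for i in range(6)]

# Toy population: each individual listed by the feature terminals it uses.
population = [
    ["x0", "x2", "x2", "x5"],
    ["x2", "x3"],
    ["x0", "x2", "x2"],
    ["x5", "x0", "x2"],
]

counts = Counter(t for individual in population for t in individual)
freq = np.array([counts.get(f, 0) for f in FEATURES], dtype=float)

# Smooth so unused features keep a small chance of being reintroduced,
# then normalize into terminal-selection probabilities.
probs = (freq + 1.0) / (freq + 1.0).sum()

rng = np.random.default_rng(6)
new_terminals = rng.choice(FEATURES, size=5, p=probs)
print(dict(zip(FEATURES, probs.round(3))), new_terminals)
```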

Relevance:

10.00%

Publisher:

Abstract:

The goal of most clustering algorithms is to find the optimal (i.e. fewest) number of clusters. However, analysis of molecular conformations of biological macromolecules obtained from computer simulations may benefit from a larger array of clusters. The Self-Organizing Map (SOM) clustering method has the advantage of generating large numbers of clusters, but often gives ambiguous results. In this work, SOMs have been shown to be reproducible when the same conformational dataset is independently clustered multiple times (~100), with the help of Cramér's V index (C_v). The ability of C_v to determine which SOMs are reproduced is generalizable across different SOM source codes. The conformational ensembles produced from MD (molecular dynamics) and REMD (replica exchange molecular dynamics) simulations of the pentapeptide Met-enkephalin (MET) and the 34-amino-acid protein human Parathyroid Hormone (hPTH) were used to evaluate SOM reproducibility. The training length of the SOM has a large impact on reproducibility. Analysis of the MET conformational data definitively determined that toroidal SOMs cluster the data better than bordered maps, because toroidal maps do not have an edge effect. For the MATLAB source code, it was determined that the learning rate function should be LINEAR with an initial learning rate factor of 0.05, and that the SOM should be trained with a sequential algorithm. The trained SOMs can be used as a supervised classifier for another dataset. The toroidal 10×10 hexagonal SOMs produced from the MATLAB program for the hPTH conformational data produced three sets of reproducible clusters (27%, 15%, and 13% of 100 independent runs), which find partitionings similar to those of smaller 6×6 SOMs. The χ^2 values produced as part of the C_v calculation were used to locate clusters with identical conformational memberships on independently trained SOMs, even those with different dimensions. The χ^2 values could also relate the different SOM partitionings to each other.
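
A minimal sketch of the reproducibility check described above: Cramér's V computed from the contingency table of two independent cluster assignments of the same conformations. The random assignments stand in for the unit labels of two independently trained SOMs; the 6×6 map size and the 10% perturbation are illustrative.

```python
# Sketch: Cramér's V between two clusterings of the same conformations.
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(labels_a, labels_b):
    """Cramér's V between two clusterings (1 = identical partitioning)."""
    a_vals, a_idx = np.unique(labels_a, return_inverse=True)
    b_vals, b_idx = np.unique(labels_b, return_inverse=True)
    table = np.zeros((a_vals.size, b_vals.size))
    np.add.at(table, (a_idx, b_idx), 1)
    chi2 = chi2_contingency(table)[0]        # the same chi^2 used to match clusters
    n = labels_a.size
    return np.sqrt(chi2 / (n * (min(table.shape) - 1)))

rng = np.random.default_rng(7)
som_run_1 = rng.integers(0, 36, 1000)        # e.g. unit indices of a 6x6 SOM
som_run_2 = som_run_1.copy()
flip = rng.random(1000) < 0.1                # perturb 10% of assignments
som_run_2[flip] = rng.integers(0, 36, flip.sum())
print(f"Cramér's V between the two runs: {cramers_v(som_run_1, som_run_2):.3f}")
```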

Relevance:

10.00%

Publisher:

Abstract:

On reading article 2365 C.c.Q., the creditor and the surety cannot perceive the rights and freedoms that this provision establishes against them or in their favour. To remedy this problem, legal scholars and the courts have given free rein to their imagination in attempting to classify this provision within established legal institutions, with a view to demystifying the content of the rule of law. For our part, we consider the defence of non-subrogation (exception de non-subrogation) to be an original notion in itself, which finds its source within its own institution. The thesis we defend is that the defence of non-subrogation, a mode of discharge whose purpose is to combat opportunistic behaviour, crystallizes the obligation of good faith by implicitly imposing on the creditor an obligation of proper subrogation. Any breach of this obligation by the creditor has the consequence of rendering the creditor's claim inadmissible against the surety before the courts. This precept clarifies the context of article 2365 C.c.Q. and, by the same token, makes it possible to delimit the contours of its domain and to specify its conditions of application. The defence of non-subrogation is a legal mechanism dating back to Roman times. It is now incorporated into almost all of the world's legal systems, in both civil law and common law. In Quebec legislation, it is crystallized in article 2365 C.c.Q., a provision of public order that may be invoked only by the surety. Its application depends on the combination of four conditions: 1) an act of the creditor; 2) the loss of a subrogatory right; 3) prejudice to the surety; 4) a causal link between the three preceding elements. When these four conditions are met, the surety is released from its undertaking to the extent of the prejudice it suffers.