839 results for Polynomial Classifier


Relevance: 10.00%

Publisher:

Abstract:

This master's thesis deals with the simulation of simultaneous credible intervals in a Bayesian context. We first consider precipitation data and functions based on these data: the empirical distribution function and the return period, a nonlinear function of the distribution function. We review various known methods for obtaining simultaneous confidence intervals on these functions using a polynomial basis, and we present a method for simulating simultaneous credible intervals. We then place ourselves in a Bayesian context, exploring different prior density models. For the most complex model, Monte Carlo simulation is needed to obtain the simultaneous posterior credible intervals. Finally, we use a nonlinear basis involving the angular transformation and monotone splines to obtain a valid simultaneous credible interval for the return period.
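
The Monte Carlo construction of simultaneous credible intervals can be sketched as follows. The data, grid size, and sup-norm calibration rule below are illustrative stand-ins, not the thesis's actual model or data:

```python
# Sketch: simultaneous credible bands from posterior Monte Carlo draws
# (synthetic draws stand in for the thesis's precipitation-based posterior).
import random

random.seed(0)
GRID = 20       # evaluation points of the function
DRAWS = 500     # posterior Monte Carlo draws
LEVEL = 0.95

# Fake posterior draws of a function on a grid: trend plus noise.
draws = [[0.1 * t + random.gauss(0.0, 0.5) for t in range(GRID)]
         for _ in range(DRAWS)]

# Pointwise mean and standard deviation at each grid point.
mean = [sum(d[t] for d in draws) / DRAWS for t in range(GRID)]
sd = [(sum((d[t] - mean[t]) ** 2 for d in draws) / (DRAWS - 1)) ** 0.5
      for t in range(GRID)]

# For each draw, the sup-norm of its standardized deviation from the mean.
sup = sorted(max(abs(d[t] - mean[t]) / sd[t] for t in range(GRID))
             for d in draws)

# Calibrate the band with the LEVEL-quantile of the sup statistic, so that
# ~95% of entire curves lie inside the band simultaneously, not just pointwise.
k = sup[int(LEVEL * DRAWS) - 1]
lower = [mean[t] - k * sd[t] for t in range(GRID)]
upper = [mean[t] + k * sd[t] for t in range(GRID)]

inside = sum(all(lower[t] <= d[t] <= upper[t] for t in range(GRID))
             for d in draws)
print(inside / DRAWS)   # close to 0.95 by construction
```

This is the key difference from pointwise intervals: the quantile of the supremum statistic widens the band until whole curves, not individual points, are covered at the target level.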

In this thesis, we study the decay of a false vacuum, that is, a vacuum that is a local minimum of a scalar potential, through quantum tunnelling. Topological defects in 1+1 dimensions, called kinks, appear when the potential has a minimum that spontaneously breaks a discrete symmetry. In 3+1 dimensions, these kinks become domain walls; they appear, for example, in magnetic materials in condensed matter. We study a model of two coupled scalar fields and the solutions of the resulting equations of motion, analyzing how the existence and energy of static solutions depend on the model parameters. A numerical sweep of parameter space reveals that stable solutions lie between dissociation zones, regions of parameter space where stable solutions no longer exist. The behaviour of unstable solutions can differ greatly depending on which dissociation zone a solution lies in. The potential consists, at first, of a sixth-order polynomial, to which a quartic polynomial multiplied by a coupling term is then added; it is chosen so that the two ends of the kink sit in distinct false vacua. The decay rate is estimated with a semiclassical approximation to show the impact of topological defects on the stability of the false vacuum. The project consists in determining the conditions under which kinks can catalyze the decay of the false vacuum. We obtain an expression for the critical kink density and an understanding of the behaviour of most of the terms involved.

OBJECTIVE: To identify the main solutions that facilitate optimal physician practice in asthma management, including the prescription of long-term controller medication and the use of written action plans. METHODS: Semi-structured individual interviews were conducted with physicians from several specialties (family physicians, pediatricians, emergency physicians, pulmonologists and allergists). The interviews were transcribed and then analyzed qualitatively and independently by two trained researchers. RESULTS: Forty-two physicians were interviewed. A total of 867 facilitators and solutions were expressed, addressing three of their needs: (1) having support in delivering optimal care, (2) being equipped to help and motivate patients to follow their recommendations, and (3) having the opportunity to offer efficient services. From these data, a taxonomy of facilitators and solutions comprising ten categories was also developed. CONCLUSION: Physicians proposed a multitude of facilitators and solutions to support optimal practice. These vary mainly by specialty and by targeted behaviour (prescription of long-term controller medication, use of written self-management plans, and general asthma management). This underlines the importance of choosing interventions in close collaboration with knowledge users, so as to obtain solutions perceived as feasible and applicable, and thus more likely to lead to practice change. The new taxonomy offers a common language for classifying facilitators and solutions.

The environment shapes the physiology, morphology and behaviour of organisms through complex, multidimensional ecological and evolutionary processes. The reproductive success of animals is determined by the fitness of a phenotype in an environment that changes constantly over timescales of one to several generations. Phenotypes are in turn shaped by the environment, which drives adaptive modifications of reproductive strategies while also imposing constraints. In this thesis, using stink bugs and their parasitoids as model organisms, I investigated how several types of plasticity can interact to influence fitness, and how the plasticity of reproductive strategies responds to several components of environmental change (host quality, ultraviolet radiation, temperature, biological invasion). First, I compared behavioural and life-history responses to body-size variation in the parasitoid Telenomus podisi Ashmead (Hymenoptera: Platygastridae), showing that reaction norms were more often positive for behaviours than for life-history traits. Next, I showed that the predatory stink bug Podisus maculiventris Say (Hemiptera: Pentatomidae) can control the colour of its eggs, and that egg pigmentation protects embryos from ultraviolet radiation; this is one component of a complex oviposition strategy that evolved in response to a multitude of environmental factors. I then tested how thermal stress affects memory dynamics in the parasitoid Trissolcus basalis (Wollaston) (Hymenoptera: Platygastridae) as it learns the reliability of chemical traces left by its host.
These experiments revealed that both high and low temperatures prevented forgetting, thereby affecting how parasitoids allocate time among host patches containing chemical traces. I also developed a general theoretical framework for classifying the effects of temperature on all behavioural aspects of ectotherms, distinguishing constraints from adaptations. Finally, I tested the ability of a native parasitoid (T. podisi) to exploit the eggs of a new invasive agricultural pest, Halyomorpha halys Stål (Hemiptera: Pentatomidae). The results showed that T. podisi attacks H. halys eggs but cannot develop in them, indicating that the invasive pest acts as an "evolutionary trap" for this parasitoid. This could indirectly benefit native stink bug species by acting as an ecological sink of resources (eggs) and time for the parasitoid. These results have important implications for how insects, including those used in biological control programs, respond to environmental change.

Objective: To determine scoliosis curve types using non-invasive surface acquisition, without prior knowledge from X-ray data. Methods: Classification of scoliosis deformities according to curve type is used in the clinical management of scoliotic patients. In this work, we propose a robust system that can determine the scoliosis curve type from non-invasive acquisition of the 3D back surface of the patients. The 3D image of the surface of the trunk is divided into patches, and local geometric descriptors characterizing the back surface are computed from each patch to constitute the features. We reduce the dimensionality using principal component analysis and retain 53 components using an overlap criterion combined with the total variance in the observed variables. A multi-class classifier is then built with least-squares support vector machines (LS-SVM). The original LS-SVM formulation was modified by weighting the positive and negative samples differently, and a new kernel was designed in order to achieve a robust classifier. The proposed system is validated using data from 165 patients with different scoliosis curve types. The results of our non-invasive classification were compared with those obtained by an expert using X-ray images. Results: The average rate of successful classification was computed using a leave-one-out cross-validation procedure. The overall accuracy of the system was 95%. As for the correct classification rates per class, we obtained 96%, 84% and 97% for the thoracic, double major and lumbar/thoracolumbar curve types, respectively. Conclusion: This study shows that it is possible to find a relationship between the internal deformity and the back surface deformity in scoliosis with machine learning methods. The proposed system uses non-invasive surface acquisition, which is safe for the patient as it involves no radiation. In addition, the design of a specific kernel improved classification performance.
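
The dimensionality-reduction step can be sketched as follows. The data is synthetic, and the 95% variance threshold stands in for the paper's overlap criterion (which is what actually selects 53 components):

```python
# Sketch: PCA on per-patch geometric descriptors, keeping enough components
# to cover most of the variance. 165 patients x 300 synthetic descriptors
# stand in for the paper's real back-surface features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(165, 300))          # hypothetical descriptor matrix
X = X - X.mean(axis=0)                   # center before PCA

# PCA via SVD: squared singular values give per-component variance.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var = s ** 2 / (X.shape[0] - 1)
ratio = np.cumsum(var) / var.sum()

k = int(np.searchsorted(ratio, 0.95)) + 1   # components for 95% variance
scores = X @ Vt[:k].T                       # reduced features for the classifier
print(scores.shape)
```

The `scores` matrix is what a downstream classifier (here, the paper's weighted LS-SVM) would be trained on.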

In this paper, a new methodology for the prediction of scoliosis curve types from non-invasive acquisitions of the back surface of the trunk is proposed. One hundred and fifty-nine scoliosis patients had their back surface acquired in 3D using an optical digitizer. Each surface is then characterized by 45 local measurements of the back surface rotation. Using a semi-supervised algorithm, the classifier is trained with only 32 labeled and 58 unlabeled data samples. Tested on 69 new samples, the classifier correctly classified 87.0% of the data. After reducing the number of labeled training samples to 12, the behavior of the resulting classifier tends to be similar to the reference case where the classifier is trained with the maximum number of available labeled data. Moreover, the addition of unlabeled data guided the classifier towards more generalizable boundaries between the classes. These results provide a proof of feasibility for using a semi-supervised learning algorithm to train a classifier for the prediction of scoliosis curve type when only a few of the training data are labeled. This is a promising clinical finding, since it will allow the diagnosis and follow-up of scoliotic deformities without exposing the patient to X-ray radiation.
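
The semi-supervised setup, a few labeled samples plus many unlabeled ones shaping the decision boundary together, can be illustrated with scikit-learn's LabelPropagation. The 2-D synthetic data and the algorithm choice are stand-ins for the paper's 45 rotation measurements and its (unspecified here) semi-supervised method:

```python
# Sketch: training with 12 labeled samples and the rest unlabeled (-1).
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(1)
# Two well-separated synthetic classes of 45 samples each.
X = np.vstack([rng.normal(loc=-2.0, size=(45, 2)),
               rng.normal(loc=+2.0, size=(45, 2))])
y_true = np.array([0] * 45 + [1] * 45)

# Keep only 6 labels per class; mark everything else as unlabeled (-1).
y = np.full(90, -1)
labeled = np.r_[0:6, 45:51]
y[labeled] = y_true[labeled]

# Label propagation spreads the few labels through the unlabeled points.
model = LabelPropagation(kernel="rbf", gamma=0.5).fit(X, y)
acc = (model.predict(X) == y_true).mean()
print(acc)
```

The unlabeled points pull the boundary toward low-density regions, which is the "more generalizable boundaries" effect the abstract reports.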

Scoliosis treatment strategy is generally chosen according to the severity and type of the spinal curve. Currently, the curve type is determined from X-rays, whose acquisition can be harmful for the patient. We propose in this paper a system that can predict the scoliosis curve type based on the analysis of the surface of the trunk. The latter is acquired and reconstructed in 3D using a non-invasive multi-head digitizing system. The deformity is described by the back surface rotation, measured on several cross-sections of the trunk. A classifier composed of three support vector machines was trained and tested using the data of 97 patients with scoliosis. A prediction rate of 72.2% was obtained, showing that the use of the trunk surface for high-level scoliosis classification is feasible and promising.
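
A classifier "composed of three support vector machines" is naturally arranged one-vs-rest: one binary SVM per curve type. The sketch below uses synthetic 2-D features in place of the back surface rotation measurements, so the arrangement, not the data, is the point:

```python
# Sketch: one-vs-rest ensemble of three binary SVMs, one per curve type.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
centers = np.array([[0, 0], [4, 0], [0, 4]])    # three synthetic classes
X = np.vstack([rng.normal(c, 0.6, size=(30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)

clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)
print(len(clf.estimators_))   # one fitted SVM per curve type
```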

Cast Al-Si alloys are widely used in the automotive, aerospace and general engineering industries due to their excellent combination of properties, such as good castability, low coefficient of thermal expansion, high strength-to-weight ratio and good corrosion resistance. The present investigation concerns the influence of alloying additions on the structure and properties of the Al-7Si-0.3Mg alloy. The primary objective is to study the beneficial effects of calcium on the structure and properties of Al-7Si-0.3Mg-xFe alloys. The second objective is to study the effects of Mn, Be and Sr additions as Fe neutralizers, and also the interaction of Mn, Be, Sr and Ca in Al-7Si-0.3Mg-xFe alloys. This study examines the dual beneficial effects of Ca, viz. modification and Fe neutralization, and compares the effects of Ca and Sr with common Fe neutralizers. The castings have been characterized with respect to their microstructure, % porosity, electrical conductivity, solidification behaviour and mechanical properties. One of the interesting observations in the present work is that a low level of calcium reduces porosity compared to the untreated alloy; however, a higher level of calcium addition leads to higher porosity in the casting. An empirical analysis comparing the results of the present work with those of other researchers on the effect of increasing iron content on the UTS and % elongation of Al-Si-Mg and Al-Si-Cu alloys has shown a linear and an inverse first-order polynomial relationship, respectively.
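
The two reported trends, UTS falling linearly with iron content and % elongation following an inverse first-order polynomial, can be illustrated with a quick fit. All numbers below are made up; the abstract does not give the measured values:

```python
# Illustration only: hypothetical UTS and elongation values versus Fe content.
import numpy as np

fe = np.array([0.1, 0.2, 0.4, 0.6, 0.8])    # wt% Fe (hypothetical)
uts = 320.0 - 60.0 * fe                      # MPa, exactly linear by construction
elong = 12.0 / (1.0 + 5.0 * fe)              # %, inverse first-order form

# A degree-1 polynomial fit recovers the linear UTS trend.
slope, intercept = np.polyfit(fe, uts, 1)
print(f"{slope:.1f} {intercept:.1f}")        # -60.0 320.0
```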

Many finite elements used in structural analysis possess deficiencies such as shear locking, incompressibility locking, poor stress predictions within the element domain, violent stress oscillation, and poor convergence. An approach that can probably overcome many of these problems is to consider elements in which the assumed displacement functions satisfy the equations of stress field equilibrium. In this method, the finite element not only has nodal equilibrium of forces, but also has inner stress field equilibrium. The displacement interpolation functions inside each individual element are truncated polynomial solutions of differential equations. Such elements are likely to give better solutions than existing elements. In this thesis, a new family of finite elements in which the assumed displacement function satisfies the differential equations of stress field equilibrium is proposed. A general procedure for constructing the displacement functions and using these functions in the generation of elemental stiffness matrices has been developed. The approach to developing field equilibrium elements is quite general, and various elements for analysing different types of structures can be formulated from the corresponding stress field equilibrium equations. Using this procedure, a nine-node quadrilateral element SFCNQ for plane stress analysis, a sixteen-node solid element SFCSS for three-dimensional stress analysis and a four-node quadrilateral element SFCFP for plate bending problems have been formulated. For implementing these elements, computer programs based on modular concepts have been developed. Numerical investigations on the performance of these elements have been carried out through standard test problems for validation purposes. Comparisons involving theoretical closed-form solutions as well as results obtained with existing finite elements have also been made. It is found that the new elements perform well in all the situations considered.
Solutions in all cases converge correctly to the exact values, and in many cases convergence is faster than with other existing finite elements. The behaviour of these field-consistent elements should generate considerable interest among users of finite elements.
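
The underlying idea has a simple one-dimensional analogue: for an axial bar, stress-field equilibrium reads EA*u'' = 0, so linear displacement polynomials already satisfy the field equations, and the two-node stiffness built from them gives exact nodal answers. A minimal sketch with hypothetical material data:

```python
# Toy 1-D analogue: shape functions that satisfy the field equation EA*u'' = 0.
E, A, L = 200e9, 1e-4, 2.0   # hypothetical steel bar: E (Pa), area (m^2), length (m)

# Shape functions N1 = 1 - x/L and N2 = x/L are linear, so u'' = 0 holds exactly.
# Strain-displacement row B = dN/dx = [-1/L, 1/L];
# stiffness k_ij = integral over [0, L] of B_i * E * A * B_j dx = E*A/L * [[1,-1],[-1,1]].
k = E * A / L
K = [[k, -k], [-k, k]]

# Fixed left end, axial force F at the right end: K[1][1] * u2 = F.
F = 1000.0
u2 = F / K[1][1]
print(u2)   # equals F*L/(E*A) exactly, because the field equations are satisfied
```

For higher-order elements the same recipe holds: the interpolation polynomials are truncated solutions of the governing differential equations rather than arbitrary polynomials.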

Data mining is one of the most active research areas today, with a wide variety of applications in everyday life. It is about finding interesting hidden patterns in a large historical database. For example, from a sales database one can discover a pattern such as "people who buy magazines tend to buy newspapers also"; from the sales point of view, the advantage is that these items can be placed together in the shop to increase sales. In this research work, data mining is applied to a domain called placement chance prediction, since making a wise career decision is crucial for anybody. In India, technical manpower analysis is carried out by an organization named the National Technical Manpower Information System (NTMIS), established in 1983-84 by India's Ministry of Education & Culture. The NTMIS comprises a lead centre in the IAMR, New Delhi, and 21 nodal centres located in different parts of the country. The Kerala State Nodal Centre is located at Cochin University of Science and Technology. The Nodal Centre collects placement information by sending postal questionnaires to graduated students on a regular basis. From this raw data available at the nodal centre, a historical database was prepared. Each record in this database includes entrance rank range, reservation, sector, sex, and a particular engineering branch. For each such combination of attributes from the historical database of student records, the corresponding placement chance is computed and stored in the database. From this data, various popular data mining models are built and tested. These models can be used to predict the most suitable branch for a new student with one of the above combinations of criteria.
Also, a detailed performance comparison of the various data mining models is carried out. This research work proposes to use a combination of data mining models, namely a hybrid stacking ensemble, for better predictions. A strategy to predict the overall absorption rate for various branches, as well as the time it takes for all the students of a particular branch to get placed, is also proposed. Finally, this research work puts forward a new data mining algorithm, namely C 4.5 * stat, for numeric data sets, which has been shown to achieve competitive accuracy on the standard benchmark UCI data sets. It also proposes an optimization strategy called parameter tuning to improve the standard C 4.5 algorithm. In summary, this research work covers all four dimensions of a typical data mining research work: application to a domain, development of classifier models, optimization, and ensemble methods.
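
A hybrid stacking ensemble can be sketched with scikit-learn's StackingClassifier. The base learners, synthetic "student record" features, and placement rule below are illustrative stand-ins, not the thesis's actual models or data:

```python
# Sketch: stacking two heterogeneous base models under a logistic meta-learner.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
# Hypothetical encoded records: rank range, reservation, sector, sex, branch.
X = rng.integers(0, 5, size=(200, 5)).astype(float)
y = (X[:, 0] + X[:, 4] > 4).astype(int)   # stand-in "placed / not placed" rule

stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=4)),
                ("rf", RandomForestClassifier(n_estimators=25, random_state=0))],
    final_estimator=LogisticRegression(),   # meta-learner over base predictions
)
acc = stack.fit(X, y).score(X, y)
print(acc)
```

The meta-learner is trained on out-of-fold predictions of the base models, which is what lets stacking outperform any single base model.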

An attempt is made by the researcher to establish a theory of discrete functions in the complex plane. Classical analysis, q-basic theory, monodiffric theory, preholomorphic theory and q-analytic theory have been utilised to develop concepts such as differentiation, integration and special functions.

Median filtering is a simple digital non-linear signal smoothing operation in which the median of the samples in a sliding window replaces the sample at the middle of the window. The resulting filtered sequence tends to follow polynomial trends in the original sample sequence. The median filter preserves signal edges while filtering out impulses. Due to this property, median filtering is finding applications in many areas of image and speech processing. Though median filtering is simple to realise digitally, its properties are not easily analysed with standard analysis techniques.
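
Both properties, impulse removal and edge preservation, are easy to see in a minimal three-sample median filter:

```python
# Minimal sliding-window median filter with a window of 3 samples.
from statistics import median

def medfilt3(x):
    # Edge samples are kept as-is; each interior sample is replaced by the
    # median of the 3-sample window centred on it.
    out = list(x)
    for i in range(1, len(x) - 1):
        out[i] = median(x[i - 1:i + 2])
    return out

impulse = [1, 1, 9, 1, 1]      # lone spike
step = [0, 0, 0, 5, 5, 5]      # clean edge
print(medfilt3(impulse))       # [1, 1, 1, 1, 1]  (impulse removed)
print(medfilt3(step))          # [0, 0, 0, 5, 5, 5]  (edge preserved)
```

A linear smoother of the same length would blur the step into intermediate values; the median keeps it sharp, which is exactly why the analysis of median filters resists the standard linear-systems tools.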

Biometrics deals with the physiological and behavioural characteristics of an individual to establish identity. Fingerprint-based authentication is the most advanced biometric authentication technology. The minutiae-based fingerprint identification method offers a reasonable identification rate. The minutiae feature map consists of about 70-100 minutia points, and matching accuracy drops as the database grows. Hence it is essential to keep the fingerprint feature code as small as possible so that identification becomes easier. In this research, a novel global-singularity-based fingerprint representation is proposed. The fingerprint baseline, the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed from the singularities and the fingerprint baseline. The feature vector consists of the polygon's angles, sides, area, type and the ridge counts between the singularities. A 100% recognition rate is achieved with this method. The method is compared with the conventional minutiae-based recognition method in terms of computation time, receiver operating characteristics (ROC) and feature vector length. Speech is a behavioural biometric modality and can be used to identify a speaker. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm. A backpropagation-based artificial neural network is trained to identify the clustered speech code. The performance of the neural network classifier is compared with the VQ-based Euclidean minimum classifier. Biometric systems that use a single modality are usually affected by problems like noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. Multi-finger, feature-level-fusion-based fingerprint recognition is developed, and its performance is measured in terms of the ROC curve.
Score-level fusion of the fingerprint- and speech-based recognition systems is performed, and 100% accuracy is achieved over a considerable range of matching thresholds.
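
Score-level fusion can be sketched as a weighted sum of min-max-normalized matcher scores followed by a threshold. The score ranges, weights, and threshold below are illustrative assumptions, not values from the thesis:

```python
# Sketch: fusing a fingerprint score and a speech score at the score level.

def minmax(score, lo, hi):
    # Map a raw matcher score into [0, 1] given its observed range.
    return (score - lo) / (hi - lo)

def fuse(fp_score, sp_score, w_fp=0.6, w_sp=0.4):
    # Assumed ranges: fingerprint matcher in [0, 200], speech matcher in [0, 50].
    return w_fp * minmax(fp_score, 0, 200) + w_sp * minmax(sp_score, 0, 50)

THRESHOLD = 0.5
genuine = fuse(170, 40)     # strong scores from both matchers
impostor = fuse(60, 10)     # weak scores from both matchers
print(genuine > THRESHOLD, impostor > THRESHOLD)   # True False
```

Normalizing before fusing matters: without it, the matcher with the larger numeric range would dominate the sum regardless of the chosen weights.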

Speech is the most natural means of communication among human beings, and speech processing and recognition have been intensive areas of research for the last five decades. Since speech recognition is a pattern recognition problem, classification is an important part of any speech recognition system. In this work, a speech recognition system is developed for recognizing speaker-independent spoken digits in Malayalam. Voice signals are sampled directly from the microphone. The proposed method is implemented for 1000 speakers uttering 10 digits each. Since the speech signals are affected by background noise, they are denoised using a wavelet denoising method based on soft thresholding. The features are then extracted using the Discrete Wavelet Transform (DWT), whose multi-resolution, multi-scale analysis characteristics make it well suited to processing non-stationary signals like speech. Speech recognition is a multiclass classification problem, so the resulting feature vectors are classified using three classifiers capable of handling multiple classes: Artificial Neural Networks (ANN), Support Vector Machines (SVM) and Naive Bayes. During the classification stage, the classifiers are trained on feature vectors with known patterns and then evaluated on a test data set. The performance of each classifier is assessed by recognition accuracy, and all three methods produced good accuracy: DWT with ANN achieved 89%, DWT with SVM 86.6%, and DWT with Naive Bayes 83.5%. ANN is found to be the best among the three methods.
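
The wavelet feature-extraction step can be illustrated with one level of the Haar DWT, the simplest discrete wavelet transform. The abstract does not state which wavelet family is used; Haar is chosen here purely for brevity:

```python
# One level of a Haar DWT: pairwise averages give the smooth "approximation"
# coefficients, pairwise differences give the transient "detail" coefficients.
import math

def haar_dwt(x):
    # Scale by 1/sqrt(2) so the transform preserves signal energy (Parseval).
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

sig = [4.0, 4.0, 2.0, 2.0, 1.0, 3.0, 0.0, 0.0]   # toy frame of samples
a, d = haar_dwt(sig)
# Energy is preserved across the transform:
print(round(sum(v * v for v in sig), 6) ==
      round(sum(v * v for v in a) + sum(v * v for v in d), 6))   # True
```

Repeating the split on the approximation coefficients yields the multi-level, multi-resolution decomposition that the feature vectors are built from; soft-thresholding the detail coefficients before reconstruction is the denoising step the abstract mentions.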

Treating e-mail filtering as a binary text classification problem, researchers have applied several statistical learning algorithms to e-mail corpora with promising results. This paper examines the performance of a Naive Bayes classifier using different approaches to feature selection and tokenization on different e-mail corpora.
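
The setup can be sketched in a few lines with scikit-learn: a bag-of-words tokenizer feeding a multinomial Naive Bayes classifier. The toy corpus below stands in for the corpora and feature-selection variants the paper actually studies:

```python
# Sketch: e-mail filtering as binary text classification with Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train = ["win money now", "cheap pills win prize",
         "meeting agenda attached", "lunch at noon tomorrow"]
labels = ["spam", "spam", "ham", "ham"]

vec = CountVectorizer()            # one possible tokenization choice
X = vec.fit_transform(train)       # document-term count matrix
clf = MultinomialNB().fit(X, labels)

pred = clf.predict(vec.transform(["win cheap prize money"]))[0]
print(pred)   # spam
```

Swapping the vectorizer (character n-grams, stop-word removal, frequency cut-offs) is precisely the kind of tokenization/feature-selection variation whose effect the paper measures.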