774 results for Proficiency-based training
Abstract:
This thesis concerns a class of learning algorithms called deep architectures. Existing results indicate that shallow, local representations are not sufficient to model functions with many factors of variation. We are particularly interested in this kind of data because we hope an intelligent agent will be able to learn to model it automatically; the hypothesis is that deep architectures are better suited to modelling it. The work of Hinton (2006) was a genuine breakthrough: the idea of using an unsupervised learning algorithm, restricted Boltzmann machines, to initialize the weights of a supervised neural network proved crucial for training the most popular deep architecture, namely fully connected artificial neural networks. This idea has been taken up and reproduced with success in several contexts and with a variety of models. In this thesis, we consider deep architectures as inductive biases. These biases are embodied not only in the models themselves, but also in the training methods often used in conjunction with them. We wish to identify the reasons why this class of functions generalizes well, the situations in which these functions can be applied, and qualitative descriptions of such functions. The objective of this thesis is to obtain a better understanding of the success of deep architectures. In the first article, we test the agreement between our intuitions, namely that deep networks are necessary to learn well from data with many factors of variation, and the empirical results. The second article is an in-depth study of the question: why does unsupervised learning help a deep network generalize better? We explore and evaluate several hypotheses attempting to elucidate how these models work. Finally, the third article seeks to characterize qualitatively the functions modelled by a deep network. These visualizations facilitate the interpretation of the representations and invariances modelled by a deep architecture.
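In the spirit of the pretraining idea credited to Hinton (2006) above, the following is a minimal sketch of greedy layer-wise unsupervised feature learning followed by a supervised classifier, using scikit-learn stand-ins; the layer sizes, toy data, and the absence of full fine-tuning are simplifying assumptions, not the thesis's setup.

```python
# Minimal sketch: stacked RBMs trained unsupervised, then a supervised
# classifier on top. Sizes and data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((500, 64))            # toy inputs in [0, 1]
y = rng.integers(0, 2, size=500)     # toy labels

model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)            # RBMs fit unsupervised (y is ignored by them),
print(model.score(X, y))   # then the top classifier is trained supervised
```

A full pretraining scheme would also fine-tune all layers jointly after initialization; that step is omitted here for brevity.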
Abstract:
Atrial fibrillation (AF) is an arrhythmia affecting the atria. In AF, atrial contraction is rapid and irregular, ventricular filling becomes incomplete, and cardiac output is reduced. AF can cause palpitations, fainting, chest pain, or heart failure, and it also increases the risk of stroke. Coronary artery bypass grafting (CABG) is a surgical procedure performed to restore blood flow in cases of severe coronary artery disease. Between 10% and 65% of patients who have never experienced AF develop it postoperatively, most often on the second or third postoperative day. AF is particularly frequent after mitral valve surgery, occurring in about 64% of patients. The onset of postoperative AF is associated with increased morbidity and with longer and more costly hospital stays. The mechanisms responsible for postoperative AF are not well understood. Identifying patients at high risk of AF after CABG would be useful for its prevention. The present project is based on the analysis of cardiac electrograms recorded in patients after aortocoronary bypass surgery. The first objective of the research is to investigate whether the recordings display typical changes before the onset of AF. The second objective is to identify predictive factors for pinpointing the patients who will develop AF. The recordings were made by Dr. Pierre Pagé's team on 137 patients treated by CABG. Three unipolar electrodes were sutured onto the atrial epicardium to record continuously during the first four postoperative days. The first task was to develop an algorithm to detect and distinguish atrial and ventricular activations on each channel, and to combine the activations from the three channels belonging to the same cardiac event. The algorithm was developed and optimized on a first set of markers, and its performance was evaluated on a second set. Validation software was developed to prepare these two sets and to correct the detections on all the recordings used later in the analyses. It was complemented by tools to form, label, and validate normal sinus beats, premature atrial and ventricular activations (PAA, PVA), and arrhythmia episodes. Preoperative clinical data were then analyzed to establish the preoperative risk of AF. Age, serum creatinine level, and a diagnosis of myocardial infarction proved to be the most important predictive factors. Although the preoperative risk level can to some extent predict who will develop AF, it was not correlated with the time of onset of postoperative AF. For all patients who had at least one AF episode lasting 10 minutes or more, the two hours preceding the first sustained AF were analyzed. This first sustained AF was always triggered by a PAA, most often originating in the left atrium. However, during the two pre-AF hours, the distribution of PAAs, and of the fraction of them originating in the left atrium, was wide and inhomogeneous across patients.
The number of PAAs, the duration of transient arrhythmias, the sinus heart rate, and the low-frequency portion of heart rate variability (LF portion) showed significant changes in the last hour before the onset of AF. The last step was to compare patients with and without sustained AF to find factors that discriminate the two groups. Five types of logistic regression models were compared. They had similar sensitivity, specificity, and receiver operating characteristic curves, and all were very poor at predicting patients without AF. A moving-average method was proposed to improve the discrimination, especially for patients without AF. Two models were retained, selected on criteria of robustness, accuracy, and applicability. Around 70% of patients without AF and 75% of patients with AF were correctly identified in the last hour before AF. The PAA rate, the fraction of PAAs initiated in the left atrium, pNN50, the atrioventricular conduction time, and the correlation between the latter and heart rate were the predictive variables common to the two models.
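As a rough illustration of the final modelling step described above, the sketch below fits a logistic regression on per-patient features and reports a cross-validated ROC summary; the feature layout and random data are placeholders, not the study's variables or results.

```python
# Illustrative sketch: logistic regression on pre-AF features;
# the data here are random placeholders, not the study's measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-patient features from the last pre-AF hour: PAA rate,
# fraction of left-atrial PAAs, pNN50, AV conduction time, and the
# correlation of AV conduction time with heart rate.
X = rng.normal(size=(137, 5))        # placeholder feature matrix
y = rng.integers(0, 2, size=137)     # 1 = developed sustained AF

model = LogisticRegression()
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())
```

The moving-average refinement mentioned in the abstract would smooth the model's per-window probabilities over consecutive time windows before thresholding; it is not shown here.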
Abstract:
There are many ways to generate geometrical models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using the information directly available in the data, such as gray levels, we built a metric to drive a mesh adaptation process. The metric is used to specify the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The resulting model quality has been evaluated qualitatively and quantitatively by comparing it with an analytical solution and with a segmentation made by an expert. Results show that, in 90% of the cases, our method gives meshes as good as or better than those of a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between adapted meshes of both methods and ground-truth volumes shows that our method decreases reconstruction errors faster. Copyright © 2015 John Wiley & Sons, Ltd.
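A small sketch of the general idea of deriving a metric tensor from gray levels, here in 2D with a Hessian-of-intensity construction; this is an assumption-laden illustration, not the paper's exact metric or its 3D tetrahedral setting.

```python
# Sketch: build a symmetric positive-definite metric per pixel from the
# Hessian of the smoothed gray levels (2D illustration; assumptions only).
import numpy as np
from scipy.ndimage import gaussian_filter

def metric_from_intensity(img, sigma=2.0, eps=1e-3):
    """Return a metric tensor field prescribing element size/orientation."""
    # Second derivatives of the smoothed image.
    Hxx = gaussian_filter(img, sigma, order=(2, 0))
    Hyy = gaussian_filter(img, sigma, order=(0, 2))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    H = np.stack([np.stack([Hxx, Hxy], -1), np.stack([Hxy, Hyy], -1)], -2)
    # Make the tensor positive definite: |eigenvalues|, floored at eps,
    # so elements shrink across strong intensity transitions.
    w, v = np.linalg.eigh(H)
    w = np.maximum(np.abs(w), eps)
    return np.einsum("...ij,...j,...kj->...ik", v, w, v)

metric = metric_from_intensity(np.random.rand(64, 64))
print(metric.shape)  # (64, 64, 2, 2): one 2x2 tensor per pixel
```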
Abstract:
Indian marine engineers are renowned for employment globally due to their knowledge, skill and reliability. This praiseworthy status has been achieved mainly due to the systematic training imparted to marine engineering cadets. However, in an era of advancing technology, marine engineering training has to remain dynamic to imbibe the latest technology as well as to meet the demands of the shipping industry. New subjects of study have to be included in the curriculum in a timely manner, taking into consideration industry requirements and best practices in shipping. The technical competence of marine engineers also has to be subjected to changes depending upon the needs of the ever-growing and over-regulated shipping industry. Besides, certain soft skills are to be developed and improved among marine engineers in order to alter or amend the personality traits leading to their career success. If timely corrective action is taken, Indian marine engineers can be in still greater demand for employment in the global maritime field. In order to enhance the employability of our marine engineers by improving their quality, a study of marine engineers in general, and Class IV marine engineers in particular, was conducted based on three distinct surveys, viz., a survey among senior marine engineers, a survey among employers of marine engineers and a survey of Class IV marine engineers themselves. The surveys were planned and the questionnaires designed to focus the study of the marine engineer officer Class IV from the point of view of the three distinct groups of maritime personnel. As a result, the strengths and weaknesses of Class IV marine engineers are identified with regard to their performance on board ships, acquisition of necessary technical skills, employability and career success. The criteria of essential qualities of a marine engineer are classified as academic, technical, social, psychological, physical, mental, emergency-responsive, communicative and leadership, and have been assessed for a practicing marine engineer by statistical analysis of the data collected from the surveys. These are assessed for Class IV marine engineers from the point of view of senior marine engineers and employers separately. The findings are delineated and graphically depicted in this thesis. Besides, six pertinent personality traits of a marine engineer, viz., self-esteem, learning style, decision making, motivation, teamwork and listening self-inventory, have been studied, and their correlation with career success has been established wherever possible. This was carried out to develop a theoretical framework for understanding what leads a marine engineer to his career attainment. This enables the author to estimate the personality strengths and weaknesses of a serving marine engineer and eventually to deduce possible corrective measures or modifications in marine engineering training in India. Maritime training is largely based on the International Convention on Standards of Training, Certification and Watchkeeping for Seafarers 1995, its associated Code, and the Merchant Shipping (STCW for Seafarers) Rules 1998. Further, the Maritime Education, Training and Assessment (META) Manual was subjected to critical scrutiny, and the relevant findings of the surveys are superimposed on the existing rule requirements and curriculum. Views of senior marine engineers and executives of various shipping companies are taken into account before arriving at the revision of the syllabus of marine engineering courses.
Modifications in the pattern of workshop and sea-service training for graduate mechanical engineering trainees are recommended. Desirable age brackets for junior engineers and chief engineers, the use of the Training and Assessment Record book (TAR Book) during training, etc. have also been evaluated. As a result of this pedagogic introspection of the existing system of marine engineering training in India, this thesis arrives at a revised pattern of workshop training of six months' duration for graduate mechanical engineers, a revised pattern of sea-service training of one year's duration, and a modified flow diagram incorporating the above. The effects of various personality traits on career success have been established, along with certain findings for the improvement of desirable personality traits of marine engineers.
Abstract:
This thesis is entitled Personnel Management Practices in the Kerala-Based Scheduled Commercial Banks. The personnel management function is of cardinal importance, requiring a sophisticated and scientific approach. In a labour-intensive service industry like banking, the productivity and ultimate profitability of the entire organization depend considerably on the effectiveness with which the personnel management function is executed and the prudence with which personnel problems are handled. The main objectives of the study are to understand the current status of personnel management functions in the banks and to evaluate the practices in the light of the principles and theories of personnel management so as to identify strengths and weaknesses. The universe of this study is the eight Scheduled Commercial Banks based in Kerala. The major limitation of the study is that, as the State Bank of Travancore, the lone public sector bank based in Kerala, did not grant permission for the collection of data, the study had to be confined to private sector banks only. Almost all of the data used for this study are primary and were collected from the files and other records of the concerned banks. The report has chapters dealing with the functional areas of personnel management, such as determination of human resource requirements, recruitment and selection, training and development, performance appraisal, promotions and compensation. The findings reveal that the practice of personnel management in the Kerala-based private sector scheduled commercial banks has not attained a degree of sophistication compatible with its role in modern business management.
Abstract:
Any automatically measurable, robust and distinctive physical characteristic or personal trait that can be used to identify an individual or verify a claimed identity, referred to as a biometric, has gained significant interest in the wake of heightened concerns about security and rapid advancements in networking, communication and mobility. Multimodal biometrics is expected to be ultra-secure and reliable, owing to the presence of multiple, independent verification clues. In this study, a multimodal biometric system utilising audio and facial signatures has been implemented and an error analysis has been carried out. A total of one thousand face images and 250 sound tracks of 50 users are used for training the proposed system. To account for attempts by unregistered users, data of 25 new users are tested. Short-term spectral features were extracted from the sound data, and vector quantization was performed using the K-means algorithm. Face images are identified with the eigenface approach, using Principal Component Analysis. The success rate of the multimodal system using speech and face is higher than that of the individual unimodal recognition systems.
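As a rough illustration of the two building blocks named above, the following sketch computes an eigenface projection with PCA and a K-means vector-quantization codebook; the array shapes and parameter values are assumptions, not the study's configuration.

```python
# Illustrative eigenface + vector-quantization sketch (scikit-learn);
# shapes and parameters are assumptions, not the study's setup.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Face branch: flattened grayscale images -> low-dimensional eigenface space.
faces = rng.random((1000, 64 * 64))          # 1000 training face images
pca = PCA(n_components=50).fit(faces)
face_codes = pca.transform(faces)            # eigenface coefficients

# Voice branch: short-term spectral feature frames -> K-means codebook.
spectral = rng.random((250 * 100, 13))       # e.g. 13-dim frames per track
codebook = KMeans(n_clusters=64, n_init=10).fit(spectral)

# A probe face is matched by nearest neighbour in eigenface space; a probe
# voice would be scored by quantization distortion per user (not shown).
probe = pca.transform(rng.random((1, 64 * 64)))
nearest = np.argmin(np.linalg.norm(face_codes - probe, axis=1))
print("closest training face index:", nearest)
```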
Abstract:
Speech signals are one of the most important means of communication among human beings. In this paper, a comparative study of two feature extraction techniques is carried out for recognizing speaker-independent spoken isolated words. The first is a hybrid approach with Linear Predictive Coding (LPC) and Artificial Neural Networks (ANN), and the second method uses a combination of Wavelet Packet Decomposition (WPD) and Artificial Neural Networks. Voice signals are sampled directly from the microphone and then processed using these two techniques to extract the features. Words from Malayalam, one of the four major Dravidian languages of southern India, are chosen for recognition. Training, testing and pattern recognition are performed using Artificial Neural Networks. The back-propagation method is used to train the ANN. The proposed method is implemented for 50 speakers uttering 20 isolated words each. Both methods produce good recognition accuracy, but Wavelet Packet Decomposition is found to be more suitable for recognizing speech because of its multi-resolution characteristics and efficient time-frequency localization.
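For concreteness, here is a minimal sketch of LPC feature extraction by the autocorrelation method on a single synthetic frame; the frame length, model order, and test signal are illustrative assumptions, not the paper's parameters.

```python
# Minimal LPC feature sketch via the autocorrelation (Yule-Walker) method;
# frame size, order, and the synthetic signal are assumptions.
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coeffs(frame, order=12):
    """LPC coefficients of one speech frame (autocorrelation method)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])  # solve R a = r
    return a                                       # predictor coefficients

fs = 8000
t = np.arange(0, 0.02, 1 / fs)                     # one 20 ms frame
frame = np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.randn(t.size)
print(lpc_coeffs(frame))                           # 12 features per frame
```

In a recognizer like the one described, such per-frame coefficient vectors would be assembled into a fixed-length pattern and fed to the ANN.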
Abstract:
Speech is a natural mode of communication for people, and speech recognition is an intensive area of research due to its versatile applications. This paper presents a comparative study of various wavelet-based feature extraction methods for recognizing isolated spoken words. Isolated words from Malayalam, one of the four major Dravidian languages of southern India, are chosen for recognition. This work includes two speech recognition methods. The first is a hybrid approach with Discrete Wavelet Transforms and Artificial Neural Networks, and the second method uses a combination of Wavelet Packet Decomposition and Artificial Neural Networks. Features are extracted using Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). Training, testing and pattern recognition are performed using Artificial Neural Networks (ANN). The proposed method is implemented for 50 speakers uttering 20 isolated words each. The experimental results obtained show the efficiency of these techniques in recognizing speech.
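The two wavelet feature types named above can be sketched with PyWavelets as band-energy features; the wavelet family, decomposition depth, and toy signal are assumptions, not the paper's settings.

```python
# Sketch of DWT and wavelet-packet energy features with PyWavelets;
# wavelet choice, depth, and the toy signal are assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)           # stand-in for one spoken word

# DWT features: energy of each approximation/detail band.
coeffs = pywt.wavedec(signal, "db4", level=4)
dwt_features = [float(np.sum(c ** 2)) for c in coeffs]

# WPD features: energy of every leaf at depth 3 (full binary tree).
wp = pywt.WaveletPacket(data=signal, wavelet="db4", maxlevel=3)
wpd_features = [float(np.sum(node.data ** 2))
                for node in wp.get_level(3, order="natural")]

print(len(dwt_features), len(wpd_features))  # 5 and 8 feature values
```

The WPD tree splits detail bands as well as approximations, which is what gives it the finer time-frequency tiling the previous abstract credits it with.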
Abstract:
The magnetic properties of amorphous Fe–Ni–B based metallic glass nanostructures were investigated. The nanostructures underwent a spin-glass transition at temperatures below 100 K and revealed an irreversibility temperature following the linear de Almeida–Thouless dependence. When the nanostructures were cooled below 25 K in a magnetic field, they exhibited an exchange bias effect with enhanced coercivity. The observed onset of exchange bias is associated with the coexistence of the spin-glass phase with another spin-glass phase formed by oxidation of the structurally disordered surface layer, displaying a distinct training effect and cooling-field dependence. The latter showed a maximum in exchange bias field and coercivity, which is probably due to competing multiple equivalent spin configurations at the boundary between the two spin-glass phases.
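For reference, the de Almeida–Thouless dependence invoked above is conventionally written as follows; this standard form is supplied for context and is not quoted from the abstract.

```latex
% Conventional de Almeida-Thouless (AT) line for the irreversibility field:
\[
  H_{\mathrm{AT}}(T) \;\propto\; \left(1 - \frac{T}{T_f}\right)^{3/2},
\]
% where T_f is the spin-glass freezing temperature; plotting H^{2/3}
% against T therefore yields the linear dependence mentioned above.
```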
Abstract:
This study overviews the basics of TiO2 with respect to its structure, properties and applications. A brief account of its structural, electronic and optical properties is provided, and various emerging technological applications utilising TiO2 are also discussed. To date, an exceptionally large number of fundamental studies and application-oriented research and development efforts have been carried out worldwide on TiO2 in its low-dimensional nanomaterial form, owing to its various novel properties. These nanostructured materials have shown many favourable properties for potential applications, including photocatalytic decomposition of pollutants, photovoltaic cells, sensors and so on. This thesis aims to make an in-depth investigation of the different linear and nonlinear optical and structural characteristics of different phases of TiO2. Correspondingly, extensive efforts to synthesise high-quality TiO2 nanostructure derivatives such as nanotubes, nanospheres and nanoflowers are continuing. Here, different nanostructures of anatase TiO2 were synthesised and analysed. Morphologically different nanostructures were found to have different impacts on their physical and electronic properties, such as varied surface area and dissimilar quantum confinement, and hence diverging suitability for different applications. In view of the advantages of TiO2, it can act as an excellent matrix for nanoparticle composite films. These composite films may lead to several advantageous functional optical characteristics. Detailed investigations of these kinds of nanocomposites were also performed, and the nanocomposites showed higher adeptness than their parent materials. Fine tuning of these parameters helps researchers to achieve high proficiency in the respective applications. These studies aim to encompass new progress in TiO2 research for efficient utilization in photocatalytic or photovoltaic applications under visible light, and to accentuate future trends of TiO2 research in the environment- and energy-related fields, serving promising applications that benefit mankind. The last section of the thesis discusses the applicability of the analysed nanomaterials for dye-sensitised solar cells, followed by suggestions for future work.
Abstract:
Background: The most common application of imputation is to infer genotypes of a high-density panel of markers on animals that are genotyped for a low-density panel. However, the increase in accuracy of genomic predictions resulting from an increase in the number of markers tends to reach a plateau beyond a certain density. Another application of imputation is to increase the size of the training set with un-genotyped animals. This strategy can be particularly successful when a set of closely related individuals is genotyped.

Methods: Imputation on completely un-genotyped dams was performed using known genotypes from the sire of each dam, one offspring and the offspring's sire. Two methods were applied, based on either allele or haplotype frequencies, to infer genotypes at ambiguous loci. Results of these methods and of two available software packages were compared. Quality of imputation under different population structures was assessed. The impact of using imputed dams to enlarge training sets on the accuracy of genomic predictions was evaluated for different populations, heritabilities and sizes of training sets.

Results: Imputation accuracy ranged from 0.52 to 0.93, depending on the population structure and the method used. The method that used allele frequencies performed better than the method based on haplotype frequencies. Accuracy of imputation was higher for populations with higher levels of linkage disequilibrium and with larger proportions of markers with more extreme allele frequencies. Inclusion of imputed dams in the training set increased the accuracy of genomic predictions. Gains in accuracy ranged from close to zero to 37.14%, depending on the simulated scenario. Generally, the larger the accuracy already obtained with the genotyped training set, the lower the increase in accuracy achieved by adding imputed dams.

Conclusions: Whenever a reference population resembling the family configuration considered here is available, imputation can be used to achieve an extra increase in accuracy of genomic predictions by enlarging the training set with completely un-genotyped dams. This strategy was shown to be particularly useful for populations with lower levels of linkage disequilibrium, for genomic selection on traits with low heritability, and for species or breeds for which the size of the reference population is limited.
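A toy sketch of the allele-frequency flavour of imputation described above: at one biallelic locus, the allele the dam must have transmitted is deduced from the offspring and the offspring's sire, and her untransmitted allele is sampled from the population allele frequency. The deduction logic and all names are simplified assumptions, not the paper's algorithm.

```python
# Toy imputation of an un-genotyped dam at one biallelic locus (alleles 0/1).
# Assumes the trio is consistent; real methods handle conflicts and phasing.
import numpy as np

rng = np.random.default_rng(0)

def impute_dam_locus(offspring, offspring_sire, p_allele1):
    """Infer the dam's allele pair from one offspring and its sire."""
    # The dam transmitted whichever offspring allele the sire cannot explain.
    for maternal in set(offspring):
        rest = list(offspring)
        rest.remove(maternal)
        if all(a in offspring_sire for a in rest):
            break
    # The dam's untransmitted allele is unknown: sample it from the
    # population allele frequency (the "allele frequency" approach).
    other = rng.choice([0, 1], p=[1 - p_allele1, p_allele1])
    return tuple(sorted((maternal, other)))

# Sire is 0/0, offspring is 0/1, so the dam must carry allele 1.
print(impute_dam_locus(offspring=(0, 1), offspring_sire=(0, 0), p_allele1=0.3))
```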
Abstract:
A persistent issue of debate in the area of 3D object recognition concerns the nature of the experientially acquired object models in the primate visual system. One prominent proposal in this regard has expounded the use of object-centered models, such as representations of the objects' 3D structures in a coordinate frame independent of the viewing parameters [Marr and Nishihara, 1978]. In contrast to this is another proposal which suggests that the viewing parameters encountered during the learning phase might be inextricably linked to subsequent performance on a recognition task [Tarr and Pinker, 1989; Poggio and Edelman, 1990]. The 'object model', according to this idea, is simply a collection of the sample views encountered during training. Given that object-centered recognition strategies have the attractive feature of leading to viewpoint independence, they have garnered much of the research effort in the field of computational vision. Furthermore, since human recognition performance seems remarkably robust in the face of imaging variations [Ellis et al., 1989], it has often been implicitly assumed that the visual system employs an object-centered strategy. In the present study we examine this assumption more closely. Our experimental results with a class of novel 3D structures strongly suggest the use of a view-based strategy by the human visual system even when it has the opportunity of constructing and using object-centered models. In fact, for our chosen class of objects, the results seem to support a stronger claim: 3D object recognition is 2D view-based.
Abstract:
We present a component-based approach for recognizing objects under large pose changes. From a set of training images of a given object we extract a large number of components, which are clustered based on the similarity of their image features and their locations within the object image. The cluster centers form an initial set of component templates from which we select a subset for the final recognizer. In experiments we evaluate different sizes and types of components and three standard techniques for component selection. The component classifiers are finally compared to global classifiers on a database of four objects.
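A minimal sketch of the component-template idea just described: crop patches from training images, cluster them, and keep the cluster centers as templates. The patch size, sampling grid, and number of clusters are assumptions for illustration, not the paper's choices.

```python
# Sketch: patch extraction + clustering into component templates;
# patch size, grid, and k are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))                  # toy training images

def extract_patches(img, size=16, stride=16):
    return [img[r:r + size, c:c + size].ravel()
            for r in range(0, img.shape[0] - size + 1, stride)
            for c in range(0, img.shape[1] - size + 1, stride)]

patches = np.array([p for img in images for p in extract_patches(img)])
templates = KMeans(n_clusters=10, n_init=10).fit(patches).cluster_centers_

# Each row of `templates` is one component template; a subset would then
# be chosen by a selection technique for the final recognizer (not shown).
print(templates.shape)                             # (10, 256)
```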
Abstract:
Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. Such a descriptor, based on a set of oriented Gaussian derivative filters, is used in our recognition system. We report here an evaluation of several techniques for orientation estimation to achieve rotation invariance of the descriptor. We also describe feature selection based on a single training image: virtual images are generated by rotating and rescaling the image, and robust features are selected. The results confirm robust performance in cluttered scenes, in the presence of partial occlusions, and when the object is embedded in different backgrounds.
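A small sketch of the oriented Gaussian derivative building block mentioned above, using the steerability of first-order derivatives; the scale and set of orientations are assumptions, and the full descriptor and orientation-estimation techniques of the paper are not reproduced.

```python
# Sketch: steer first-order Gaussian derivative responses to a set of
# orientations. Scale and angles are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def oriented_responses(img, sigma=2.0, n_angles=4):
    # First-order Gaussian derivatives along the y and x axes.
    gy = gaussian_filter(img, sigma, order=(1, 0))
    gx = gaussian_filter(img, sigma, order=(0, 1))
    # First derivatives are steerable: the response at angle t is
    # cos(t)*gx + sin(t)*gy, so two filters give every orientation.
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)
    return np.stack([np.cos(t) * gx + np.sin(t) * gy for t in angles])

img = np.random.rand(64, 64)
desc = oriented_responses(img)
print(desc.shape)   # (4, 64, 64): one response map per orientation
```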
Abstract:
The genesis of this innovation lies in the commitment of a national Irish business enterprise to the professional development of its staff in general, and to the enhancement of its Information Technologies (IT) staff specifically, in collaboration with a national Higher Education (HE) provider. A postgraduate degree, awarded by the HE provider, seeks to bring coherence and cohesion to the education and training provision for newly recruited IT graduate staff of the business enterprise, acting simultaneously as an induction process for new staff and as a professional capacity-building exercise, thereby enhancing the enterprise's organisational learning and collective competence in the areas of information technologies, IT security and technical service management. The curriculum was designed by the HE provider in collaboration with the business enterprise and is offered to circa sixteen IT staff per cycle of delivery through a model known generally as the new apprenticeship for professional practice, which uses a combination of college-based, block-release taught elements, regular day-release seminars and substantial work-based learning, supported by the academic staff of the HE provider and work-based support staff/mentors of the business enterprise. Academic quality assurance, pedagogical, assessment and accreditation responsibilities remain with the HE provider. (...)