955 results for medical student selection


Relevance:

20.00%

Publisher:

Abstract:

In research on Silent Speech Interfaces (SSI), different sources of information (modalities) have been combined, aiming at obtaining better performance than with the individual modalities. However, when combining these modalities, the dimensionality of the feature space rapidly increases, yielding the well-known "curse of dimensionality". As a consequence, in order to extract useful information from this data, one has to resort to feature selection (FS) techniques to lower the dimensionality of the learning space. In this paper, we assess the impact of FS techniques for silent speech data, in a dataset with 4 non-invasive and promising modalities, namely: video, depth, ultrasonic Doppler sensing, and surface electromyography. We consider two supervised (mutual information and Fisher's ratio) and two unsupervised (mean-median and arithmetic mean-geometric mean) FS filters. The evaluation was made by assessing the classification accuracy (word recognition error) of three well-known classifiers (k-nearest neighbors, support vector machines, and dynamic time warping). The key results of this study show that both unsupervised and supervised FS techniques improve the classification accuracy on both individual and combined modalities. For instance, on the video component, we attain relative performance gains of 36.2% in error rates. FS is also useful as pre-processing for feature fusion. Copyright © 2014 ISCA.
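As a hedged illustration of the two supervised FS filters named above (mutual information and Fisher's ratio), the sketch below ranks features and keeps the top-k using scikit-learn's mutual_info_classif and a hand-rolled Fisher's ratio; the synthetic feature matrix and class labels stand in for one modality's data and are not the paper's dataset or pipeline.

```python
# Minimal sketch of the two supervised filters named in the abstract
# (mutual information and Fisher's ratio), used to rank features and
# keep the top-k. Not the paper's exact pipeline; data is synthetic.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_ratio(X, y):
    """Per-feature Fisher's ratio: between-class spread of the class
    means divided by the pooled within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

def select_top_k(X, y, k, criterion="mi"):
    scores = mutual_info_classif(X, y) if criterion == "mi" else fisher_ratio(X, y)
    idx = np.argsort(scores)[::-1][:k]   # indices of the k highest-scoring features
    return X[:, idx], idx

# Synthetic stand-in for one modality's feature matrix (e.g. video features)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))
y = rng.integers(0, 10, size=200)        # 10 hypothetical word classes
X_sel, kept = select_top_k(X, y, k=50, criterion="fisher")
print(X_sel.shape, kept[:5])
```

In the multimodal setting described above, the same ranking could be applied per modality before feature fusion, in line with the abstract's observation that FS is also useful as pre-processing for fusion.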

Relevance:

20.00%

Publisher:

Abstract:

Aims of study: 1) Describe the importance of the human visual system in lesion detection in medical imaging perception research; 2) Discuss the relevance of research in medical imaging addressing visual function analysis; 3) Identify visual function tests that could be conducted on observers prior to participation in medical imaging perception research.

Relevance:

20.00%

Publisher:

Abstract:

The process of resources systems selection plays an important part in Distributed/Agile/Virtual Enterprises (D/A/V Es) integration. However, resources systems selection is still a difficult problem to solve in a D/A/V E, as is pointed out in this paper. Globally, we can say that the selection problem has been approached from different perspectives, originating different kinds of models/algorithms to solve it. In order to assist the development of an intelligent and flexible web prototype tool (broker tool) that integrates all the selection model activities and tools, and that can adapt to each D/A/V E project or instance (this is the major goal of our final project), in this paper we present a formulation of a kind of resources selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, which is solved using simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the problem studied. The limitations detected reveal the need for other kinds of algorithms (approximate solution algorithms) outside the domain of applicability found for the algorithms simulated. However, for a broker tool, knowledge of the algorithms' limitations is very important in order to develop and select, based on the problem features, the most suitable algorithm that guarantees good performance.
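As a hedged sketch of the kind of integer programming formulation mentioned above: each processing task must be assigned to exactly one of its pre-selected resources at minimum total cost, modeled with binary variables and solved with PuLP's bundled CBC solver (which applies branch and bound to the LP relaxation). The tasks, candidate resources, and costs below are hypothetical and only illustrate the structure, not the paper's actual formulation or data.

```python
# Illustrative binary integer program for a task-to-resource selection
# problem of the kind described in the abstract (hypothetical data).
# PuLP's bundled CBC solver applies branch and bound to the LP relaxation.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

tasks = ["t1", "t2", "t3"]
# Pre-selected candidate resources and (hypothetical) costs per task
candidates = {
    "t1": {"r1": 4.0, "r2": 6.5},
    "t2": {"r2": 3.0, "r3": 2.5},
    "t3": {"r1": 5.0, "r3": 4.5},
}

prob = LpProblem("resources_selection", LpMinimize)
x = {(t, r): LpVariable(f"x_{t}_{r}", cat=LpBinary)
     for t in tasks for r in candidates[t]}

# Objective: total cost of the chosen task/resource assignments
prob += lpSum(candidates[t][r] * x[t, r] for (t, r) in x)

# Each processing task is assigned to exactly one pre-selected resource
for t in tasks:
    prob += lpSum(x[t, r] for r in candidates[t]) == 1

prob.solve()
chosen = [(t, r) for (t, r), var in x.items() if var.value() > 0.5]
print(chosen, value(prob.objective))
```

As the abstract notes, the solve time of such exact methods grows with the number of processing tasks and of pre-selected resources per task, which is what motivates approximate algorithms outside that domain of applicability.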

Relevance:

20.00%

Publisher:

Abstract:

In cluster analysis, it can be useful to interpret the partition built from the data in the light of external categorical variables which are not directly involved in clustering the data. An approach is proposed in the model-based clustering context to select a number of clusters which both fits the data well and takes advantage of the potential illustrative ability of the external variables. This approach makes use of the integrated joint likelihood of the data and the partitions at hand, namely the model-based partition and the partitions associated with the external variables. It is noteworthy that each mixture model is fitted by maximum likelihood to the data alone, the external variables being used only to select a relevant mixture model. Numerical experiments illustrate the promising behaviour of the derived criterion. © 2014 Springer-Verlag Berlin Heidelberg.
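The criterion itself is specific to the paper, but the underlying step — fitting Gaussian mixture models by maximum likelihood (EM) for a range of cluster counts and comparing them with a model-selection score — can be sketched as follows. Here the standard BIC stands in for the integrated joint likelihood with external partitions, and the data and external categorical variable are synthetic.

```python
# Sketch of the model-fitting step behind the criterion: fit Gaussian
# mixtures by maximum likelihood (EM) for several K, then compare them.
# BIC is used here as a stand-in for the paper's integrated joint
# likelihood with external partitions; the data are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (120, 2))])
external = np.array([0] * 100 + [1] * 120)   # illustrative external categorical variable

models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 6)}
bic = {k: m.bic(X) for k, m in models.items()}
best_k = min(bic, key=bic.get)

# Cross-tabulate the chosen partition against the external variable
labels = models[best_k].predict(X)
table = np.zeros((best_k, len(np.unique(external))), dtype=int)
for lab, ext in zip(labels, external):
    table[lab, ext] += 1
print(best_k, bic[best_k])
print(table)
```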

Relevance:

20.00%

Publisher:

Abstract:

Many learning problems require handling high-dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) to the learning tasks. It is thus clear that there is a need for adequate techniques for feature representation, reduction, and selection, to improve both the classification accuracy and the memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium- and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
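A hedged sketch of the general idea — unsupervised (equal-frequency) feature discretization followed by an unsupervised filter that ranks the discretized features — is given below using scikit-learn's KBinsDiscretizer and a simple variance-based ranking. This is a generic illustration under those assumptions, not the specific discretization or selection techniques proposed in the paper.

```python
# Generic illustration of unsupervised discretization followed by an
# unsupervised feature-ranking filter (here: variance of the discretized
# features). Not the paper's exact methods; the data are synthetic.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(2)
X = rng.lognormal(size=(300, 2000))          # dense, high-dimensional stand-in

# Unsupervised equal-frequency discretization into a small number of bins
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")
Xd = disc.fit_transform(X)

# Unsupervised filter: keep the features whose discretized values vary most
k = 100
scores = Xd.var(axis=0)
keep = np.argsort(scores)[::-1][:k]
X_reduced = Xd[:, keep]
print(X_reduced.shape)
```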

Relevance:

20.00%

Publisher:

Abstract:

Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and are able to act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster.
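As a hedged illustration of a low-complexity relevance/redundancy filter in the same spirit: features are ranked by a cheap relevance score (absolute correlation with the label) and then greedily accepted unless they are strongly correlated with an already-selected feature. The scoring choices, the 0.9 redundancy threshold, and the synthetic data are assumptions for illustration, not the specific criteria proposed in the paper.

```python
# Generic relevance/redundancy filter: rank features by a cheap relevance
# score, then greedily keep a feature only if it is not too correlated
# with the features already kept. Illustration only, not the paper's
# specific criteria; data and the redundancy threshold are hypothetical.
import numpy as np

def relevance_redundancy_filter(X, y, k, redundancy_threshold=0.9):
    # Relevance: absolute Pearson correlation between each feature and the label
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    relevance = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    order = np.argsort(relevance)[::-1]

    selected = []
    for j in order:
        # Redundancy: skip j if it is highly correlated with any kept feature
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) > redundancy_threshold
            for s in selected
        )
        if not redundant:
            selected.append(j)
        if len(selected) == k:
            break
    return np.array(selected)

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10_000))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500) > 0).astype(float)
print(relevance_redundancy_filter(X, y, k=20)[:10])
```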

Relevance:

20.00%

Publisher:

Abstract:

This work extends a recent comparative study covering four different courses lectured at the Polytechnic of Porto - School of Engineering, with respect to the usage of a particular Learning Management System, i.e. Moodle, and its impact on students' results. A fifth course, which includes a number of resources especially supporting laboratory classes, is now added to the analysis. This particular course includes a number of remote experiments, made available through VISIR (Virtual Instrument Systems in Reality) and directly accessible through links included in the Moodle course page. We have analyzed the students' behavior in following these links and in effectively running experiments in VISIR (and also in using other lab-related resources in Moodle). These data have been correlated with students' classifications in the lab component and in the exam, each one weighing 50% of their final marks. We aimed to compare students' performance in a richly Moodle-supported environment (with a lab component) and in a poorly Moodle-supported environment (with only a theoretical component). This question followed from conclusions drawn in the aforementioned comparative study, where it was shown that even though a positive correlation existed between the number of Moodle accesses and the final exam grade obtained by each student, the explanation behind it was not straightforward, as the quality of the resources was preponderant over their quantity.
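A minimal sketch of the kind of correlation analysis referred to above — relating per-student Moodle access counts to final exam grades with a Pearson correlation — is shown below; the access counts and grades are hypothetical, not the study's data.

```python
# Minimal sketch of correlating Moodle access counts with exam grades.
# The numbers are hypothetical, not the study's data.
import numpy as np
from scipy.stats import pearsonr

moodle_accesses = np.array([12, 45, 30, 8, 60, 25, 40, 15, 55, 33])
exam_grades = np.array([10.0, 15.5, 13.0, 9.0, 17.0, 12.5, 14.0, 11.0, 16.0, 13.5])

r, p = pearsonr(moodle_accesses, exam_grades)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```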

Relevance:

20.00%

Publisher:

Abstract:

A nationwide seroepidemiologic survey of human T. cruzi infection was carried out in Brazil from 1975 to 1980 as a joint programme of the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and the Superintendência de Campanhas (SUCAM), Ministry of Health, Brazil. Due to the marked heterogeneity of urban populations as a result of wide migratory movements in the country, and since triatomine transmission of the disease occurs mostly in rural areas, the survey was limited to rural populations. The survey was based on a large cluster sampling of complete households, from randomly selected localities comprising 10 to 500 houses, or up to 200 houses in the Amazon region. Random selection of localities and houses was made possible by a detailed mapping of every locality in the country, performed and continuously updated by SUCAM. In the selected houses, duplicate samples on filter paper were collected from every resident aged 1 year or older. Samples were tested in one of 14 laboratories scattered across the country by the indirect anti-IgG immunofluorescence test, with reagents produced and standardized by a central laboratory located at the Instituto de Medicina Tropical de São Paulo. Continuous quality control was performed at this laboratory, which tested duplicates of 10% to 15% of all samples examined by the collaborating laboratories. Data regarding the number of sera collected, patients' age, sex, place of residence, place of birth, and test result were computerized at the Department of Preventive Medicine, Medical School, University of São Paulo, São Paulo, Brazil. Serologic prevalence indices were estimated for each municipality and mapped according to States and Territories of Brazil. Since data were already available for the State of São Paulo and the Federal District, these units were not included in the survey.

Relevance:

20.00%

Publisher:

Abstract:

The widespread use of mobile devices has led to a proliferation of applications for these devices in the most diverse fields, and the clinical area is no exception. Both at the professional level and in education, mobile technologies have long been adopted in this area for a wide range of purposes. The work presented here essentially aims to prove the real importance of mobile learning in the context of clinical learning. More than implementing a simple educational resource, the intention was to design an integrated system that responds to all of the student's needs, both while studying, in its various phases and locations, and also in the hospital service where the student is working as a specialty resident. After an exhaustive analysis of the relevant mobile applications in the medical field, it was found that no tool existed that integrates several learning modules at a cost affordable for most students. Accordingly, an application capable of filling this gap was conceived, and it is detailed throughout this thesis. The development of this work benefited from the valuable collaboration of the tool's prospective end users, since the choice of the modules to integrate was essentially based on their opinions. Also within the scope of this thesis is the evaluation of the prototype by the students. This evaluation is intended to validate the effective importance of a tool of this nature for a medical student, as well as the impact the prototype had on their opinion about the concept of mobile learning in clinical learning. With a view to a future implementation of an educational resource of this kind, the most relevant negative and positive points for the student were also collected. In short, this work validates the importance of the role that mobile learning applications can play for a medical student, both in their places of study and in the service where they may be working.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Mechanical Engineering.

Relevance:

20.00%

Publisher:

Abstract:

Arquivos de Medicina 1998; 12(4): 246-248

Relevance:

20.00%

Publisher:

Abstract:

Background - Medical image perception research relies on visual data to study the diagnostic relationship between observers and medical images. A consistent method to assess visual function for participants in medical imaging research has not been developed and represents a significant gap in existing research. Methods - Three visual assessment factors appropriate to observer studies were identified: visual acuity, contrast sensitivity, and stereopsis. A test was designed for each, and 30 radiography observers (mean age 31.6 years) participated in each test. Results - Mean binocular visual acuity for distance was 20/14 for all observers. The difference between observers who did and did not use corrective lenses was not statistically significant (P = .12). All subjects had normal values for near visual acuity and stereoacuity. Contrast sensitivity was better than population norms. Conclusion - All observers had normal visual function and could participate in medical imaging visual analysis studies. Protocols of evaluation and population norms are provided. Further studies are necessary to fully understand the relationship between visual performance on tests and diagnostic accuracy in practice.
