845 results for Medicine Research Statistical methods
Abstract:
The characterization and grading of glioma tumors via image-derived features, for diagnosis, prognosis, and treatment response, has been an active research area in medical image computing. This paper presents a novel method for automatic detection and classification of glioma from conventional T2-weighted MR images. Automatic detection of the tumor was established using a newly developed method called the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA). Statistical features were extracted from the detected tumor texture using first-order statistics and gray level co-occurrence matrix (GLCM) based second-order statistical methods. The statistical significance of each feature was determined by a t-test and its corresponding p-value. A decision system was developed for grade detection of glioma using the selected features and their p-values. The detection performance of the decision system was validated using the receiver operating characteristic (ROC) curve. The diagnosis and grading of glioma using this non-invasive method can contribute promising results to medical image computing.
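As a rough illustration of the feature pipeline described above (not the paper's own code), the sketch below computes first-order statistics and GLCM-based second-order texture features for a segmented tumor region and screens them with a two-sample t-test. The AGASA segmentation itself is paper-specific and is assumed to have already produced each region of interest; function names and the feature subset are illustrative.

```python
# Sketch of the feature-extraction and feature-screening steps: first-order
# statistics plus GLCM texture features, filtered by a two-sample t-test.
# `roi` is assumed to be a 2-D uint8 array produced by the segmentation step.
from scipy import stats
from skimage.feature import graycomatrix, graycoprops

def texture_features(roi):
    """First-order stats and GLCM-based second-order stats for one tumor ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return {
        "mean": roi.mean(),
        "variance": roi.var(),
        "skewness": stats.skew(roi.ravel()),
        "contrast": graycoprops(glcm, "contrast")[0, 0],
        "energy": graycoprops(glcm, "energy")[0, 0],
        "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
    }

def significant_features(low_grade_rois, high_grade_rois, alpha=0.05):
    """Keep features whose two-sample t-test p-value is below alpha."""
    low = [texture_features(r) for r in low_grade_rois]
    high = [texture_features(r) for r in high_grade_rois]
    selected = {}
    for name in low[0]:
        _, p = stats.ttest_ind([f[name] for f in low], [f[name] for f in high])
        if p < alpha:
            selected[name] = p
    return selected
```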
Abstract:
This is a collection of PowerPoint and Word documents used to deliver a 10 ECTS module at HE4 level to PhD students in the School of Medicine.
Abstract:
ABSTRACT Introduction: The role of new echocardiographic techniques in the diagnosis of acute myocardial infarction is still being established, and assessment of left ventricular mechanics could suggest the presence of hemodynamically significant coronary artery disease. Objectives: To determine whether, in patients with acute myocardial infarction, measurement of global and regional longitudinal strain predicts the presence of significant coronary artery disease. Methods: This is a diagnostic test study evaluating the operating characteristics of left ventricular mechanics for the detection of significant coronary artery disease, compared against cardiac catheterization as the gold standard. We analyzed 54 patients with acute myocardial infarction taken to cardiac catheterization, who underwent transthoracic echocardiography with measurement of global and regional longitudinal strain. Results: Of the 54 patients analyzed, 83% had significant coronary artery disease. A global longitudinal strain < -17.5 had a sensitivity of 85% and a specificity of 78% for predicting the presence of coronary artery disease; for the left anterior descending artery, a regional longitudinal strain < -17.4 had a sensitivity of 82% and a specificity of 44%; for the circumflex artery, a sensitivity of 87% and a specificity of 37%; and for the right coronary artery, a sensitivity of 73% and a specificity of 32%. Conclusions: Echocardiography with ventricular mechanics in patients with acute myocardial infarction is useful for predicting the presence of hemodynamically significant coronary artery disease.
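For readers unfamiliar with the reported operating characteristics, the sketch below shows how sensitivity and specificity at a single strain cutoff are computed against catheterization as the gold standard. It is a minimal illustration, not the study's analysis; the direction of the threshold (impaired strain being less negative than -17.5) and all variable names are assumptions.

```python
# Minimal sketch of a diagnostic-test evaluation: sensitivity and specificity
# of a global longitudinal strain (GLS) cutoff against catheterization as the
# gold standard. Threshold direction and inputs are illustrative assumptions.
import numpy as np

def sensitivity_specificity(gls, has_cad, cutoff=-17.5):
    """Treat GLS above the cutoff (less negative = impaired) as test-positive."""
    gls = np.asarray(gls, dtype=float)
    has_cad = np.asarray(has_cad, dtype=bool)   # catheterization result
    test_positive = gls > cutoff
    tp = np.sum(test_positive & has_cad)
    fn = np.sum(~test_positive & has_cad)
    tn = np.sum(~test_positive & ~has_cad)
    fp = np.sum(test_positive & ~has_cad)
    return tp / (tp + fn), tn / (tn + fp)
```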
Abstract:
Background: The prevalence of individuals with high cardiovascular risk is elevated in elderly populations. Although metabolic syndrome (MS) increases cardiovascular risk, information on the prevalence of MS in the elderly is scarce. In this study we assessed MS prevalence in a population of elderly Japanese-Brazilians using different MS definitions according to waist circumference cutoff values. Material/Methods: We studied 339 elderly subjects, 44.8% male, aged 60 to 88 years (70.1 +/- 6.8). MS was defined according to the criteria proposed by the Joint Interim Statement in 2009. As waist circumference cutoff values remain controversial for Asian and Japanese populations, we employed 3 different cutoffs commonly used in Japanese epidemiological studies: 1) >90 cm for men and >80 cm for women; 2) >85 cm for men and >90 cm for women; 3) >85 cm for men and >80 cm for women. Results: MS prevalence ranged from 59.9% to 65.8% according to the different definitions. We observed 90% concordance and no statistically significant difference (p>0.05) in MS prevalence between the 3 definitions. MS was diagnosed by all 3 cutoff values in 55.8% of our population, while MS was excluded by all cutoffs in only 34.2%. The prevalence of altered MS components was as follows: arterial blood pressure 82%, fasting glycemia 65.8%, triglycerides 43.4%, and HDL-C levels 36.9%. Conclusions: Elderly Japanese-Brazilians present a high prevalence of metabolic syndrome independent of waist circumference cutoff values. Concordance between the 3 definitions is high, suggesting that all 3 cutoff values yield similar metabolic syndrome prevalence estimates in this population.
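A minimal sketch of the cutoff comparison, assuming a tidy per-subject table: prevalence is recomputed under each of the three sex-specific waist cutoffs. The column names and the simplified Joint Interim Statement rule (any three of five components, with waist as one of them) are illustrative assumptions.

```python
# Sketch of recomputing MS prevalence under three waist-circumference cutoffs.
# Assumed columns: sex ('M'/'F'), waist (cm), n_other_components (0-4, count of
# the four non-waist MS components that are altered).
import pandas as pd

CUTOFFS = {"def1": {"M": 90, "F": 80},
           "def2": {"M": 85, "F": 90},
           "def3": {"M": 85, "F": 80}}

def ms_prevalence(df: pd.DataFrame) -> dict:
    """MS prevalence per definition: any 3 of 5 components, waist included."""
    out = {}
    for name, cut in CUTOFFS.items():
        waist_pos = df.apply(lambda r: r.waist > cut[r.sex], axis=1)
        ms = (waist_pos.astype(int) + df.n_other_components) >= 3
        out[name] = ms.mean()
    return out
```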
Abstract:
Amniotic fluid (AF) has been described as a potential source of mesenchymal stem cells (MSCs) for biomedical purposes. Evaluation of alternative cryoprotectants and freezing protocols capable of maintaining the viability and stemness of these cells after cooling is therefore still needed. AF stem cells (AFSCs) were tested with different freezing methods and cryoprotectants. Cell viability, gene expression, surface markers, and plasticity were evaluated after thawing. AFSCs expressed the undifferentiation genes Oct4 and Nanog, presented typical markers (CD29, CD44, CD90, and CD105), and were able to differentiate into mesenchymal lineages. All tested cryoprotectants preserved these features of AFSCs; however, variations in cell viability were observed. In this regard, dimethyl sulfoxide (Me2SO) showed the best results. The freezing protocols tested did not promote significant changes in AFSC viability. Both time-programmed and non-programmed freezing methods could be used for successful AFSC cryopreservation for 6 months. Although all tested cryoprotectants maintained undifferentiated gene expression, typical markers, and plasticity of AFSCs, only Me2SO and glycerol yielded workable viability ratios.
Abstract:
Chaabene, H, Hachana, Y, Franchini, E, Mkaouer, B, Montassar, M, and Chamari, K. Reliability and construct validity of the karate-specific aerobic test. J Strength Cond Res 26(12): 3454-3460, 2012. The aim of this study was to examine the absolute and relative reliability and the external responsiveness of the karate-specific aerobic test (KSAT). The study comprised 43 male karatekas: 19 participated in the first study, to establish test-retest reliability, and 40, selected on the basis of their karate experience and level of practice, participated in the second study, to identify the external responsiveness of the KSAT. The latter group was divided into 2 categories: a national-level group (Gn) and a regional-level group (Gr). Analysis showed excellent test-retest reliability of time to exhaustion (TE), with an intraclass correlation coefficient ICC(3,1) >0.90, a standard error of measurement (SEM) <5% (3.2%), and a mean difference (bias) +/- the 95% limits of agreement of -9.5 +/- 78.8 seconds. There was a significant difference between test and retest sessions in peak lactate concentration (Peak [La]) (9.12 +/- 2.59 vs. 8.05 +/- 2.67 mmol·L-1; p < 0.05) but not in peak heart rate (HRpeak) or rating of perceived exertion (RPE) (196 +/- 9 vs. 194 +/- 9 b·min-1 and 7.6 +/- 0.93 vs. 7.8 +/- 1.15; p > 0.05), respectively. National-level karate athletes (1,032 +/- 101 seconds) outperformed regional-level athletes (841 +/- 134 seconds) in TE during the KSAT (p < 0.001); thus, the KSAT provided good external responsiveness. The area under the receiver operating characteristic curve was >0.70 (0.86; 95% confidence interval: 0.72-0.95). A significant difference was detected in Peak [La] between the national-level (6.09 +/- 1.78 mmol·L-1) and regional-level (8.48 +/- 2.63 mmol·L-1) groups, but not in HRpeak (194 +/- 8 vs. 195 +/- 8 b·min-1) or RPE (7.57 +/- 1.15 vs. 7.42 +/- 1.1), respectively. The results of this study indicate that the KSAT provides excellent absolute and relative reliability and can effectively distinguish karate athletes of different competitive levels. Thus, the KSAT may be suitable for field assessment of the aerobic fitness of karate practitioners.
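The reliability statistics named above can be reproduced from paired test-retest data as in the following sketch; it is a generic illustration, not the authors' code, and assumes `test` and `retest` are paired arrays of times to exhaustion in seconds.

```python
# Sketch of ICC(3,1), standard error of measurement (SEM), and Bland-Altman
# 95% limits of agreement for paired test-retest measurements.
import numpy as np

def icc_3_1(test, retest):
    """Two-way mixed, single-measure, consistency ICC for two sessions."""
    data = np.column_stack([test, retest])        # subjects x sessions
    n, k = data.shape
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    grand = data.mean()
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_error = (np.sum((data - row_means[:, None] - col_means[None, :] + grand) ** 2)
                / ((n - 1) * (k - 1)))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

def reliability_summary(test, retest):
    icc = icc_3_1(test, retest)
    pooled_sd = np.std(np.concatenate([test, retest]), ddof=1)
    sem = pooled_sd * np.sqrt(1 - icc)            # standard error of measurement
    diff = np.asarray(retest, float) - np.asarray(test, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)          # 95% limits of agreement
    return {"ICC(3,1)": icc, "SEM": sem, "bias": bias,
            "LoA": (bias - half_width, bias + half_width)}
```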
Abstract:
This Ph.D. thesis focuses on the investigation of chemical and sensory analytical parameters linked to the quality and purity of different categories of oils obtained from olives: extra virgin olive oils, both those sold in large retail chains (supermarkets and discount stores) and those collected directly at some Italian mills, and lower-quality oils (refined, lampante, and "repaso"). Alongside the adoption of traditional, well-established analytical procedures such as gas chromatography and high-performance liquid chromatography, I set up innovative, fast, and environmentally friendly methods. For example, I developed analytical approaches based on Fourier transform medium infrared spectroscopy (FT-MIR) and time domain reflectometry (TDR), coupled with a robust chemometric elaboration of the results. I also investigated freshness and quality markers that are not included in the official parameters (in Italian and European regulations): the adoption of such a full chemical and sensory analytical plan allowed me to obtain interesting information about the degree of quality of EVOOs, mostly within the Italian market. Here the range of quality of EVOOs proved very wide in terms of sensory attributes, price classes, and chemical parameters. Thanks to collaboration with other Italian and foreign research groups, I carried out several applied studies, especially focusing on the shelf life of oils obtained from olives and on the effects of thermal stress on product quality. I also studied innovative technological treatments, such as clarification using inert gases as an alternative to traditional filtration. Moreover, during a three-and-a-half-month research stay at the University of Applied Sciences in Zurich, I carried out a study on the application of statistical methods to the elaboration of sensory results obtained from the official Swiss Panel and from consumer tests.
Abstract:
The objective of this study was to develop a criteria catalogue serving as a guideline for authors to improve the quality of reporting of experiments in basic research in homeopathy. A Delphi process was initiated, including three rounds of adjusting and phrasing plus two consensus conferences. European researchers who had published experimental work within the last 5 years were involved. The resulting checklist for authors provides a catalogue of 23 criteria. The "Introduction" should focus on the underlying hypotheses and the homeopathic principle investigated, and state whether experiments are exploratory or confirmatory. "Materials and methods" should comprise information on the object of investigation, experimental setup, parameters, intervention, and statistical methods. A more detailed description of the homeopathic substances, for example, manufacture, dilution method, and starting point of dilution, is required. A further aim of the Delphi process is to raise scientists' awareness of reporting blinding, allocation, replication, quality control, and system performance controls. The "Results" section should provide the exact number of treated units per setting included in each analysis and state missing samples and dropouts. Results presented in tables and figures are as important as appropriate measures of effect size, uncertainty, and probability. The "Discussion" should offer more than a general interpretation of results in the context of current evidence: it should also address limitations and appraise the aptitude of the chosen experimental model. Authors of homeopathic basic research publications are encouraged to apply our checklist when preparing their manuscripts. Feedback on the applicability, strengths, and limitations of the list is encouraged to enable future revisions.
Abstract:
PURPOSE Segmentation of the proximal femur in digital antero-posterior (AP) pelvic radiographs is required to create a three-dimensional model of the hip joint for use in planning and treatment. However, manually extracting the femoral contour is tedious and prone to subjective bias, while automatic segmentation must accommodate poor image quality, overlapping anatomical structures, and femur deformity. A new method was developed for femur segmentation in AP pelvic radiographs. METHODS Using manual annotations on 100 AP pelvic radiographs, a statistical shape model (SSM) and a statistical appearance model (SAM) of the femur contour were constructed. The SSM and SAM are used to segment new AP pelvic radiographs in a three-stage approach. At initialization, the mean SSM is coarsely registered to the femur in the AP radiograph through a scaled rigid registration. The Mahalanobis distance defined on the SAM is employed as the search criterion for each suggested landmark location, and dynamic programming is used to eliminate ambiguities. After all landmarks are assigned, a regularized non-rigid registration deforms the current mean shape of the SSM to produce a new segmentation of the proximal femur. The second and third stages are executed iteratively until convergence. RESULTS A set of 100 clinical AP pelvic radiographs (not used for training) was evaluated. The mean segmentation error was [Formula: see text], requiring [Formula: see text] s per case when implemented in Matlab. The influence of the initialization on segmentation results was tested by six clinicians, demonstrating no significant difference. CONCLUSIONS A fast, robust, and accurate method for femur segmentation in digital AP pelvic radiographs was developed by combining an SSM and a SAM with dynamic programming. The method can be extended to the segmentation of other bony structures such as the pelvis.
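To make the landmark search step concrete, a minimal sketch of a Mahalanobis-distance search over candidate appearance profiles is given below. It illustrates only this one ingredient of the method; profile sampling, the SSM registration stages, and the dynamic-programming disambiguation are paper-specific and not reproduced, and all names are assumptions.

```python
# Sketch of the SAM-based landmark search: each candidate position's local
# appearance profile is scored by its Mahalanobis distance to the statistical
# appearance model, and the closest candidate wins.
import numpy as np

def mahalanobis_search(candidates, sam_mean, sam_cov_inv):
    """Return the index of the candidate profile closest to the SAM.

    candidates : (n_candidates, profile_len) sampled intensity profiles
    sam_mean   : (profile_len,) mean profile learned from training radiographs
    sam_cov_inv: (profile_len, profile_len) inverse covariance of the profiles
    """
    diffs = candidates - sam_mean
    # Mahalanobis distance d^2 = (x - mu)^T Sigma^-1 (x - mu), per candidate
    d2 = np.einsum("ij,jk,ik->i", diffs, sam_cov_inv, diffs)
    return int(np.argmin(d2))
```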
Abstract:
Background: Disturbed interpersonal communication is a core problem in schizophrenia. Patients with schizophrenia often appear disconnected and "out of sync" when interacting with others. This may involve perception, cognition, motor behavior, and nonverbal expressiveness. Although well known from clinical observation, mainstream research has neglected this area, and the corresponding theoretical concepts, statistical methods, and assessment tools were missing. Recent research, however, has shown that objective, video-based measures can be used to reliably quantify nonverbal behavior in schizophrenia, and newly developed algorithms allow movement synchrony to be calculated. We previously found that the objective amount of movement of patients with schizophrenia during social interactions was closely related to their symptom profiles (Kupper et al., 2010). Over and above the mere amount of movement, the degree of synchrony between patients and healthy interactants may be indicative of various problems in the domain of interpersonal communication and social cognition. Methods: Building on our earlier study, head movement synchrony was assessed objectively (using Motion Energy Analysis, MEA) in 378 brief, videotaped role-play scenes involving 27 stabilized outpatients diagnosed with paranoid-type schizophrenia. Results: Lower head movement synchrony was indicative of symptoms (negative symptoms, but also conceptual disorganization and lack of insight), poorer verbal memory, lower self-evaluated competence, and poorer social functioning. Many of these relationships remained significant even when corrected for the patients' amount of movement. Conclusion: The results suggest that nonverbal synchrony may be an objective and sensitive indicator of the severity of symptoms, cognition, and social functioning.
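A schematic sketch of a motion-energy-based synchrony measure in the spirit of MEA follows; it is an assumption-laden illustration, not the study's implementation, and the window and lag parameters are placeholders.

```python
# Sketch of a synchrony measure: frame-wise motion energy per person is
# reduced to a time series, and synchrony is summarized as the peak absolute
# windowed cross-correlation across a range of lags.
import numpy as np

def motion_energy(frames, roi):
    """Sum of absolute frame differences inside a region of interest.

    frames: (n_frames, height, width) grayscale video
    roi:    (y0, y1, x0, x1) bounding box around one person's head
    """
    y0, y1, x0, x1 = roi
    clip = frames[:, y0:y1, x0:x1].astype(float)
    return np.abs(np.diff(clip, axis=0)).sum(axis=(1, 2))

def synchrony(series_a, series_b, max_lag=15, window=120):
    """Mean over windows of the peak |cross-correlation| across lags."""
    peaks = []
    for start in range(0, len(series_a) - window, window):
        a = series_a[start:start + window]
        cors = []
        for lag in range(-max_lag, max_lag + 1):
            b = np.roll(series_b, lag)[start:start + window]
            if np.std(a) > 0 and np.std(b) > 0:
                cors.append(abs(np.corrcoef(a, b)[0, 1]))
        peaks.append(max(cors) if cors else 0.0)
    return float(np.mean(peaks))
```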
Abstract:
Pairwise meta-analysis is an established statistical tool for synthesizing evidence from multiple trials, but it is informative only about the relative efficacy of two specific interventions. The usefulness of pairwise meta-analysis is thus limited in real-life medical practice, where many competing interventions may be available for a given condition and studies informing some of the pairwise comparisons may be lacking. This commonly encountered scenario has led to the development of network meta-analysis (NMA). In the last decade, numerous applications, methodological developments, and empirical studies in NMA have been published, and the area is thriving as its relevance to public health is increasingly recognized. This article reviews the literature on NMA methodology, aiming to pinpoint the developments that have appeared in the field.
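To clarify the core idea that distinguishes NMA from pairwise meta-analysis, the sketch below combines direct and indirect evidence on a comparison A vs. C through a common comparator B under the consistency assumption; the effect sizes and variances are placeholders.

```python
# Sketch of the simplest NMA building block: the consistency relation
# d_AC = d_AB + d_BC yields an indirect A-vs-C estimate, pooled with the
# direct A-vs-C evidence by fixed-effect inverse-variance weighting.
def indirect_estimate(d_ab, var_ab, d_bc, var_bc):
    """Indirect A-vs-C effect and variance via the common comparator B."""
    return d_ab + d_bc, var_ab + var_bc

def pool(d1, var1, d2, var2):
    """Fixed-effect inverse-variance pooling of two independent estimates."""
    w1, w2 = 1 / var1, 1 / var2
    return (w1 * d1 + w2 * d2) / (w1 + w2), 1 / (w1 + w2)

# Placeholder numbers: direct A-B = 0.30 (var 0.04), B-C = 0.20 (var 0.05),
# direct A-C = 0.55 (var 0.09); pool direct and indirect A-C evidence.
d_ind, v_ind = indirect_estimate(0.30, 0.04, 0.20, 0.05)
d_ac, v_ac = pool(0.55, 0.09, d_ind, v_ind)
```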
Abstract:
This thesis addresses on-road vehicle detection and tracking with a monocular vision system. The problem has attracted the attention of the automotive industry and the research community, as it is the first step toward driver assistance and collision avoidance systems and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no fully satisfactory solution has yet been devised, and it therefore remains an open research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion, and the real-time processing requirement. This thesis presents a unified approach to vehicle detection and tracking that tackles these issues with statistical methods. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment is achieved and the computational cost is alleviated. Two complementary strategies are proposed to address the first task, hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art feature extraction methods are evaluated and new descriptors are proposed by exploiting knowledge of vehicle appearance. Due to the lack of appropriate public databases, a new database was generated, and the classification performance of the descriptors was extensively tested on it. Finally, a methodology for fusing the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made to its three key elements: the inference algorithm, the dynamic model, and the observation model. In particular, a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint tracking of multiple vehicles affordable. Additionally, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model, since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the information available from the previous processing blocks. The proposed approach is shown to run in near real time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
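A compact sketch of the tracking core follows, under stated assumptions rather than as the thesis implementation: a particle set evolves under a constant-velocity dynamic model (natural in the rectified domain) and is updated with Metropolis-Hastings (MCMC) moves instead of classical importance resampling; the observation likelihood is left abstract, and all dimensions and noise levels are placeholders.

```python
# Sketch of an MCMC-flavored particle tracker with a constant-velocity model.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, dt=1.0, noise=0.5):
    """Constant-velocity motion: each particle's state is [x, y, vx, vy]."""
    particles[:, 0] += dt * particles[:, 2]
    particles[:, 1] += dt * particles[:, 3]
    particles += rng.normal(0, noise, particles.shape)  # process noise
    return particles

def mcmc_update(particles, log_likelihood, n_sweeps=3, step=0.8):
    """Metropolis-Hastings sweeps over the particle set (no resampling)."""
    logp = np.array([log_likelihood(p) for p in particles])
    for _ in range(n_sweeps):
        proposals = particles + rng.normal(0, step, particles.shape)
        logp_new = np.array([log_likelihood(p) for p in proposals])
        # Accept each proposal with probability min(1, exp(logp_new - logp))
        accept = np.log(rng.random(len(particles))) < (logp_new - logp)
        particles[accept] = proposals[accept]
        logp[accept] = logp_new[accept]
    return particles
```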
Abstract:
Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease in which the heart muscle is partially thickened and blood flow is, potentially fatally, obstructed. It is one of the leading causes of sudden cardiac death in young people. Electrocardiography (ECG) and echocardiography (Echo) are the standard tests for identifying HCM and other cardiac abnormalities. The American Heart Association has recommended using a pre-participation questionnaire for young athletes instead of ECG or Echo tests because of the cost and time involved in having an expert cardiologist interpret the results of those tests. Initially, we set out to develop a classifier for automated prediction of young athletes' heart conditions based on the answers to the questionnaire. Classification results and further in-depth analysis using computational and statistical methods indicated significant shortcomings of the questionnaire in predicting cardiac abnormalities. Automated methods for analyzing ECG signals can help reduce cost and save time in the pre-participation screening process by detecting HCM and other cardiac abnormalities. The main goal of this dissertation is therefore to identify HCM through computational analysis of 12-lead ECG. ECG signals recorded on one or two leads have been analyzed in the past to classify individual heartbeats into different types of arrhythmia, as annotated primarily in the MIT-BIH database. In contrast, we classify complete 12-lead ECG recordings to assign patients to two groups: HCM vs. non-HCM. The challenges we address include missing ECG waves in one or more leads and the dimensionality of a large feature set, for which we propose imputation and feature-selection methods. We develop heartbeat classifiers employing Random Forests and Support Vector Machines, and propose a method to classify full 12-lead ECGs based on the proportion of heartbeats classified as HCM. Our experimental results show that classifiers developed with these methods perform well in identifying HCM. The two contributions of this thesis are thus the use of computational and statistical methods to discover shortcomings in a current screening procedure and the development of methods to identify HCM through computational analysis of 12-lead ECG signals.
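A minimal sketch of the two-level classification scheme follows, assuming feature extraction, imputation, and feature selection have already produced beat-level feature matrices: a heartbeat classifier (a random forest here, one of the two classifier families used) and a patient-level rule based on the proportion of HCM-classified beats.

```python
# Sketch of beat-level classification plus a patient-level aggregation rule:
# a 12-lead ECG recording is labeled HCM when the proportion of beats the
# classifier marks as HCM exceeds a threshold. Names and the threshold value
# are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier

def train_beat_classifier(X_beats, y_beats):
    """X_beats: (n_beats, n_features) from 12-lead ECG; y_beats: 1 = HCM beat."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_beats, y_beats)
    return clf

def classify_patient(clf, patient_beats, threshold=0.5):
    """Label the full recording by the proportion of beats classified as HCM."""
    beat_labels = clf.predict(patient_beats)
    return int(beat_labels.mean() > threshold)
```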
Abstract:
Federal Highway Administration, Office of Safety and Traffic Operations, Washington, D.C.
Abstract:
"TID-4500 ; Biology and Medicine."