883 results for Image interpretation, Computer-assisted


Relevance: 100.00%

Publisher:

Abstract:

PURPOSE: To investigate protein expression and mutations in phosphatase and tensin homolog (PTEN) in patients with stage IB cervical squamous cell carcinoma (CSCC), and their association with clinical-pathologic features, tumor p53 expression, cell proliferation and angiogenesis. METHODS: Women with stage IB CSCC (n = 20, Study Group) and uterine myoma (n = 20, Control Group), aged 49.1 ± 1.7 years (mean ± standard deviation, range 27-78 years), were prospectively evaluated. Patients with cervical cancer underwent Piver-Rutledge class III radical hysterectomy with pelvic lymphadenectomy, and patients in the Control Group underwent vaginal hysterectomy. Tissue samples from the procedures were stained with hematoxylin and eosin for histological evaluation. Protein expression was detected by immunohistochemistry, and staining for PTEN, p53, Ki-67 and CD31 was evaluated. The intensity of PTEN immunostaining was estimated by computer-assisted image analysis, based on previously reported protocols. Data were analyzed using Student's t-test to evaluate differences between the groups, with the level of significance set at p < 0.05. RESULTS: PTEN expression intensity was lower in the CSCC group than in the Control (benign cervix) samples (150.5 ± 5.2 versus 204.2 ± 2.6; p < 0.001). Our study did not identify any mutations after sequencing all nine PTEN exons. PTEN expression was not associated with tumor expression of p53 (p = 0.9), CD31 (p = 0.8) or Ki-67 (p = 0.3), or with clinical-pathologic features, in patients with invasive carcinoma of the cervix. CONCLUSIONS: Our findings demonstrate that PTEN protein expression is significantly diminished in CSCC.
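The group comparison described above rests on a two-sample Student's t-test of mean staining intensity. A minimal sketch follows; the intensity values are illustrative stand-ins, not the study's raw data:

```python
import math

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance,
    as used to compare mean immunostaining intensity between groups."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # pooled variance across both groups
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # statistic and degrees of freedom

# Illustrative intensities only (not the study's data):
cscc = [148.0, 152.1, 150.9, 149.5, 151.0]
control = [203.5, 205.0, 204.8, 203.9, 204.1]
t, df = students_t(cscc, control)
# |t| far exceeds the ~2.3 critical value for df = 8 at p = 0.05,
# mirroring the reported group difference (150.5 vs 204.2, p < 0.001)
```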

Abstract:

This study is based on the notion that, within a few years, all students are likely to have a computer of some kind as their primary tool at school. The overall aim is to contribute to knowledge of what this development of computer-assisted multimodal text production and communication on and over the net may entail in a school context. The study takes an abductive approach, drawing on theory from Media and Communication studies and from Pedagogy (particularly media pedagogy, multimodality, storytelling, conversation research and deliberative democracy) and is based on a design-based research (DBR) project in three schools. The empirical data were retrieved from four school classes, in school years 4 and 5, with good access to computers and digital cameras. The classes used class blogs to tell visitors about their school work, and Skype to communicate with other classes in Sweden and Tanzania. A variety of research methods was employed: content analysis of texts, observations with field notes and camera documentation, interviews with individual students, group interviews with teachers and students, and a small survey. The study is essentially qualitative, focusing on students' different perceptions. A small quantitative study was conducted to determine whether any factors and variables could be linked to each other, and to enable comparison of the surveyed group with other research results. The results suggest that more computers at school offer more opportunities for real-life assignments and the chance to secure an authentic audience for the students' production: primarily the students' parents and relatives, students in the same class, and students at other schools. A theoretical analysis model for determining the degree of reality and authenticity in various school assignments was developed.
The results also indicate that access to cameras for documenting various events in the classes, and to an authentic audience, can create new opportunities for storytelling that have not previously been practiced at school. The documentary photo invites the viewer into the present tense of the image and into the location where the picture was taken, regardless of who took it. The students make use of this, and here too a model was developed to describe the relationship. The study also considers freedom of expression and democracy. One of the more unexpected findings is that the students in the study did not see that they could influence other people's perceptions, or change various power structures, through communication on the web, either in or outside of school.

Abstract:

A gravimetric method was evaluated as a simple, sensitive, reproducible, low-cost alternative for quantifying the extent of brain infarct after occlusion of the medial cerebral artery in rats. In ether-anesthetized rats, the left medial cerebral artery was occluded for 1, 1.5 or 2 h by inserting a 4-0 nylon monofilament suture into the internal carotid artery. Twenty-four hours later, the brains were processed for histochemical triphenyltetrazolium chloride (TTC) staining and quantitation of the ischemic infarct. In each TTC-stained brain section, the ischemic tissue was dissected with a scalpel and fixed in 10% formalin at 0°C until its total mass could be estimated. The mass (mg) of the ischemic tissue was weighed on an analytical balance and compared to its volume (mm³), estimated either by plethysmometry using platinum electrodes or by computer-assisted image analysis. Infarct size as measured by the weighing method (mg), reported as a percentage (%) of the affected (left) hemisphere, correlated closely with volume (mm³, also reported as %) estimated by computerized image analysis (r = 0.88; P < 0.001; N = 10) or by plethysmometry (r = 0.97-0.98; P < 0.0001; N = 41). This degree of correlation was maintained between different experimenters. The method was also sensitive enough to detect the effect of different ischemia durations on infarct size (P < 0.005; N = 23) and the effect of drug treatments in reducing the extent of brain damage (P < 0.005; N = 24). The data suggest that, in addition to being simple and low cost, the weighing method is a reliable alternative for quantifying brain infarct in animal models of stroke.
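The mass-versus-volume agreement reported above is expressed as a Pearson correlation coefficient, which can be computed as below; the paired measurements are invented for illustration, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation, as used to compare infarct mass (mg)
    against volume (mm^3) estimates of the same lesions."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative paired measurements (percent of the left hemisphere):
mass_pct = [12.0, 18.5, 25.1, 30.4, 35.9, 41.2]
volume_pct = [11.5, 19.0, 24.0, 31.2, 36.5, 40.8]
r = pearson_r(mass_pct, volume_pct)
# r close to 1 indicates the two quantitation methods agree closely
```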

Abstract:

The Saimaa ringed seal is one of the most endangered seals in the world. It is a symbol of Lake Saimaa, and much effort has been devoted to saving it. Traditional methods of seal monitoring include capturing the animals and installing sensors on their bodies. These invasive identification methods can be painful and can affect the behavior of the animals. Automatic identification of seals using computer vision provides a more humane method of monitoring. This Master's thesis focuses on automatic image-based identification of Saimaa ringed seals, consisting of detection and segmentation of a seal in an image, analysis of its ring patterns, and identification of the detected seal based on features of those ring patterns. The proposed algorithm is evaluated on a dataset of 131 individual seals. In experiments with 363 images, 81% of the images were successfully segmented automatically. Furthermore, a new approach for interactive identification of Saimaa ringed seals is proposed. The results of this research are a starting point for future research on seal photo-identification.

Abstract:

This paper presents two studies, both examining the efficacy of a computer programme (Captain's Log) in training attentional skills. The population of interest is the traumatically brain-injured. Study #1 is a single-case design that offers recommendations for the second, larger (N = 5) inquiry. Study #2 is an eight-week hierarchical treatment programme with a multiple-baseline testing component. Attention, memory, listening comprehension, locus-of-control, self-esteem, visuo-spatial and general outcome measures are employed within the testing schedule. Results suggest that any improvement was a result of practice effects. With a few single-case exceptions, the participants showed little improvement on the dependent measures.

Abstract:

This study had three purposes related to the effective implementation and practice of computer-mediated online distance education (C-MODE) at the elementary level: (a) to identify a preliminary framework of criteria or guidelines for effective implementation and practice, (b) to identify areas of C-MODE for which criteria or guidelines of effectiveness have not yet been developed, and (c) to develop an implementation and practice criteria questionnaire based on a review of the distance education literature, and to use the questionnaire in an exploratory survey of elementary C-MODE practitioners. Using the survey instrument, the beliefs and attitudes of 16 elementary C-MODE practitioners about what constitutes effective implementation and practice principles were investigated. Respondents, who included both administrators and instructors, provided information about themselves and the program in which they worked. They rated 101 individual criteria statements on a 5-point Likert scale with the values 1 (Strongly Disagree), 2 (Disagree), 3 (Neutral or Undecided), 4 (Agree) and 5 (Strongly Agree). Respondents also provided qualitative data by commenting on the individual statements or suggesting other statements they considered important. Eighty-two different statements or guidelines related to the successful implementation and practice of computer-mediated online education at the elementary level were endorsed. Responses to a small number of statements differed significantly by gender and years of experience. A new area for investigation, namely the role of parents, which has received little attention in the online distance education literature, emerged from the findings. The study also identified a number of other areas within an elementary context where additional research is necessary. These included: (a) differences in the factors that determine learning in distance education settings and traditional settings, (b) elementary students' ability to function in an online setting, (c) the role and workload of instructors, (d) the importance of effective, timely communication with students and parents, and (e) the use of a variety of media.

Abstract:

Please consult the paper edition of this thesis. It is available on the 5th floor of the Library at call number Z 9999 E38 D56 1992.

Abstract:

The objective of our research is to explore and study the question of the computer-based instrumentation of archaeological reconstruction projects in monumental architecture, with the aim of proposing new tools. The research starts from a question: "How, and with what computational means, could architectural reconstruction projects be carried out in archaeology?" This question first required a study of the different restitution approaches that have been brought to bear on archaeological reconstruction projects, at their various phases. The goal is to understand the (epistemological) evolution of the methodological approaches that actors in this field have adopted in order to apply information and communication technologies (ICT) to the built heritage domain. This study allowed us to identify two main avenues: a first aimed exclusively at the "representation" of project results, and a second aimed at modelling the process itself in order to assist the archaeologist in the different phases of the project. We show that it is the second approach that enables this combination and gives archaeologists better use of the possibilities that computing tools can and will offer. This part demonstrates the systemic and complex nature of applying ICT to archaeological restitution. The multitude of actors; of technical, cultural and other conditions; of the means employed; as well as the variety of objectives pursued in archaeological reconstruction projects, call for a new approach that takes this complexity into account. To reach our research objective, a further study of the nature of the archaeological process was required.
It is a matter of understanding the links and interrelations that are established between the different technical and intellectual units at play, as well as the different modes of reasoning present in archaeological reconstruction projects for the built heritage. This study highlights the direct relationship between the subjective character of the process and the great variability of the approaches and reasoning employed. The research is therefore exploratory and propositional, confronting the systemic and complex character of concrete experience with the knowable elements of reality found in scholarly publications. The study of archaeological reasoning through scholarly publications allows us to propose a first typology of the reasoning examined. Each of these typologies reflects a methodological approach based on an organization of actions that can be recorded in a set of reasoning modules. From the phenomena and processes observed, this research brings out a model representing the interrelations and interactions, as well as the specific products, of these complex connections. This model reflects a recursive, trial-and-error process in which the actor successively "experiments", according to the objectives of the undertaking and through chosen reasoning modules, with several answers to the questions that arise concerning the definition of the corpus, description, structuring, interpretation and validation of the results, until the latter appears to satisfy the initial objectives. The model is validated through a case study of the seventh pylon of the temple of Karnak in Egypt. The results obtained show that reasoning modules are a promising way to assist archaeologists in archaeological reconstruction projects.
These modules offer a multiplicity of combinations of actions, and thereby favour a diversity of approaches and lines of reasoning that can be brought to bear on such projects, while maintaining the evolving nature of the overall system.

Abstract:

In realistic image synthesis, the final intensity of a pixel is computed by estimating a multi-dimensional rendering integral. A large portion of the research in this field seeks new techniques to reduce the computational cost of rendering while preserving the fidelity and correctness of the resulting images. In attempting to reduce computation toward real-time rendering, complex realistic effects are often left out or replaced by ingenious but mathematically incorrect tricks. To accelerate rendering, several lines of work have either addressed the computation of individual pixels directly, by improving the underlying numerical integration routines, or sought to amortize the cost per image region using adaptive methods based on predictive models of light transport. The objective of this thesis, and of the resulting article, is to build on a method of the latter type [Durand2005] and to advance research in fast adaptive realistic rendering, using a Fourier-based analysis of light transport to guide and prioritize ray casting. We propose an adaptive sampling and reconstruction approach for rendering animated scenes illuminated by environment maps, allowing the reconstruction of effects such as shadows and reflections at all frequency levels while preserving temporal coherence.

Abstract:

Image analysis and graphics synthesis can be achieved with learning techniques that use image examples directly, without physically-based 3D models. In our technique: (1) the mapping from novel images to a vector of "pose" and "expression" parameters can be learned from a small set of example images using a function approximation technique that we call an analysis network; (2) the inverse mapping from input "pose" and "expression" parameters to output images can be synthesized from a small set of example images and used to produce new images using a similar synthesis network. The techniques described here have several applications in computer graphics, special effects, interactive multimedia and very low bandwidth teleconferencing.
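The analysis direction can be sketched with a linear least-squares map standing in for the more general function-approximation network described above; the example sizes and data here are entirely synthetic:

```python
import numpy as np

# Sketch of the "analysis" direction: learn a map from example image
# vectors to pose/expression parameters. The paper's analysis network
# is a more general function approximator; plain least squares is used
# here only as an illustrative stand-in.
rng = np.random.default_rng(0)
n_examples, n_pixels, n_params = 20, 64, 2

poses = rng.uniform(-1, 1, size=(n_examples, n_params))
true_map = rng.normal(size=(n_params, n_pixels))
images = poses @ true_map          # toy "rendered" example images

# Analysis: fit images -> pose parameters by least squares
W, *_ = np.linalg.lstsq(images, poses, rcond=None)

# Apply the learned map to a novel image with a known (hidden) pose
novel_pose = np.array([[0.3, -0.7]])
novel_image = novel_pose @ true_map
recovered = novel_image @ W        # estimated pose for the novel image
```

Since the toy images are exactly linear in the pose parameters, the recovered pose matches the hidden one; with real images a nonlinear approximator (as in the paper) is needed.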

Abstract:

Introduction: In neurosurgical practice, the use of thoracic pedicle screws has been increasing in the treatment of various spinal pathologies. Since the original description, correct cannulation of the track has been confirmed by probing with a pedicle feeler; however, the validity and safety of this instrument are limited, and there is a risk of serious complications. This study assesses the safety and validity of using the feeler to diagnose the integrity of the thoracic pedicle track. Methods: Thoracic pedicles were cannulated in cadaveric specimens and randomly classified as normal (intact) or abnormal (breached). Four spine surgeons with different levels of expertise then evaluated each pedicle track. Agreement studies were performed, obtaining the Kappa coefficient, overall accuracy, sensitivity, specificity, PPV and NPV, and the area under the ROC curve to determine the accuracy of the test. Results: The accuracy and validity of diagnosing the pedicle track and locating the site of a breach are directly related to the surgeon's experience and training; the most experienced evaluator obtained the best results. The feeler showed good accuracy (area under the ROC curve 0.86) for the diagnosis of pedicle breaches. Discussion: Accurate evaluation of the pedicle track, i.e. the presence or absence of a breach, depends on the surgeon's level of experience; in addition, the diagnostic accuracy for a breach varies with its location.
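The agreement statistic reported in such studies, Cohen's kappa, corrects raw agreement for chance. A minimal computation, with invented palpation calls rather than the study's data:

```python
def cohen_kappa(rater, truth, labels=("intact", "breached")):
    """Cohen's kappa between a surgeon's palpation call and the known
    state of each pedicle track (illustrative data only)."""
    n = len(rater)
    agree = sum(r == t for r, t in zip(rater, truth)) / n
    # expected chance agreement from the marginal frequencies
    expected = sum(
        (rater.count(l) / n) * (truth.count(l) / n) for l in labels
    )
    return (agree - expected) / (1 - expected)

# Invented example: 12 tracks, one intact track called "breached"
truth = ["intact"] * 6 + ["breached"] * 6
rater = ["intact"] * 5 + ["breached"] * 7
kappa = cohen_kappa(rater, truth)
```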

Abstract:

Medical image processing is an important research area. The development of new techniques that assist and improve the visual interpretation of images quickly and accurately is fundamental in real clinical settings. Most of the contributions of this thesis are based on Information Theory. This theory deals with the transmission, storage and processing of information, and is used in fields such as physics, computer science, mathematics, statistics, biology, computer graphics, etc. In this thesis, numerous tools based on Information Theory are presented that improve on existing methods in the area of image processing, in particular in the fields of image registration and image segmentation. Finally, two specialized applications for medical assessment, developed within the framework of this thesis, are presented.

Abstract:

Objective: This paper presents a detailed study of fractal-based methods for texture characterization of mammographic mass lesions and architectural distortion. The purpose of this study is to explore the use of fractal and lacunarity analysis for the characterization and classification of both tumor lesions and normal breast parenchyma in mammography. Materials and methods: We conducted comparative evaluations of five popular fractal dimension estimation methods for characterizing the texture of mass lesions and architectural distortion, and applied the concept of lacunarity to describe the spatial distribution of pixel intensities in mammographic images. These methods were tested on a set of 57 breast masses and 60 normal breast parenchyma samples (dataset 1), and on another set of 19 architectural distortions and 41 normal breast parenchyma samples (dataset 2). Support vector machines (SVM) were used as the pattern classification method for tumor classification. Results: Experimental results showed that the fractal dimension of regions of interest (ROIs) depicting mass lesions and architectural distortion was statistically significantly lower than that of normal breast parenchyma for all five methods. Receiver operating characteristic (ROC) analysis showed that the fractional Brownian motion (FBM) method generated the highest area under the ROC curve of the five methods for both datasets (Az = 0.839 for dataset 1 and 0.828 for dataset 2). Lacunarity analysis showed that ROIs depicting mass lesions and architectural distortion had higher lacunarities than ROIs depicting normal breast parenchyma. The combination of FBM fractal dimension and lacunarity yielded higher Az values (0.903 and 0.875, respectively) than either feature alone for both datasets. The application of the SVM improved the performance of the fractal-based features in differentiating tumor lesions from normal breast parenchyma, generating higher Az values. Conclusion: The FBM texture model is the most appropriate model for characterizing mammographic images, because its self-affinity assumption is a better approximation. Lacunarity is an effective counterpart to the fractal dimension in texture feature extraction from mammographic images. The classification results obtained in this work suggest that the SVM is an effective method with great potential for classification in mammographic image analysis.
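The idea of estimating a fractal dimension from image texture can be illustrated with the classic box-counting estimator; the paper itself favours the FBM model, so this is only a schematic stand-in on a synthetic image:

```python
import numpy as np

def box_counting_dimension(img, threshold=0.5):
    """Box-counting estimate of fractal dimension for a binary image.
    (A schematic illustration; the paper's preferred estimator is the
    fractional Brownian motion model.)"""
    binary = img > threshold
    size = binary.shape[0]        # assumes a square, power-of-two image
    sizes, counts = [], []
    s = size
    while s >= 2:
        s //= 2
        # count boxes of side s containing at least one "on" pixel
        view = binary.reshape(size // s, s, size // s, s)
        occupied = int(view.any(axis=(1, 3)).sum())
        if occupied > 0:
            sizes.append(s)
            counts.append(occupied)
    # slope of log(count) vs log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled square has dimension 2
img = np.ones((64, 64))
d = box_counting_dimension(img)
```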

Abstract:

The technique of constructing a transformation, or regrading, of a discrete data set such that the histogram of the transformed data matches a given reference histogram is commonly known as histogram modification. The technique is widely used for image enhancement and normalization. A method which has been previously derived for producing such a regrading is shown to be “best” in the sense that it minimizes the error between the cumulative histogram of the transformed data and that of the given reference function, over all single-valued, monotone, discrete transformations of the data. Techniques for smoothed regrading, which provide a means of balancing the error in matching a given reference histogram against the information lost with respect to a linear transformation are also examined. The smoothed regradings are shown to optimize certain cost functionals. Numerical algorithms for generating the smoothed regradings, which are simple and efficient to implement, are described, and practical applications to the processing of LANDSAT image data are discussed.
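The regrading described above is, in essence, histogram specification via matched cumulative histograms. The sketch below shows the standard nearest-CDF construction, not the paper's optimal or smoothed algorithms, on synthetic 8-bit images; note the lookup table is single-valued and monotone, as the paper requires:

```python
import numpy as np

def match_histogram(data, reference, levels=256):
    """Monotone regrading of `data` so its histogram approximates that
    of `reference` (classical histogram specification; a stand-in for
    the paper's optimal-regrading method)."""
    d_hist = np.bincount(data.ravel(), minlength=levels)
    r_hist = np.bincount(reference.ravel(), minlength=levels)
    # cumulative histograms, normalised to [0, 1]
    d_cdf = np.cumsum(d_hist) / d_hist.sum()
    r_cdf = np.cumsum(r_hist) / r_hist.sum()
    # map each input level to the reference level with the nearest
    # (not smaller) cumulative value; searchsorted keeps this monotone
    lut = np.searchsorted(r_cdf, d_cdf).clip(0, levels - 1)
    return lut.astype(data.dtype)[data]

rng = np.random.default_rng(1)
dark = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)      # dark image
bright = rng.integers(128, 256, size=(32, 32), dtype=np.uint8) # reference
out = match_histogram(dark, bright)   # regraded into the bright range
```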

Abstract:

Recent studies showed that features extracted from brain MRIs can discriminate well between Alzheimer's disease and Mild Cognitive Impairment. This study provides an algorithm that sequentially applies advanced feature selection methods to find the best subset of features in terms of binary classification accuracy. The classifiers that provided the highest accuracies were then used to solve a multi-class problem with the one-versus-one strategy. Although several approaches based on Regions of Interest (ROIs) extraction exist, the predictive power of features had not yet been investigated by comparing filter and wrapper techniques. The findings of this work suggest that (i) IntraCranial Volume (ICV) normalization can lead to overfitting and worsen prediction accuracy on the test set, and (ii) the combined use of a Random Forest-based filter with a Support Vector Machine-based wrapper improves binary classification accuracy.
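The filter-plus-wrapper pipeline can be sketched in simplified form. Here a univariate correlation filter stands in for the study's Random Forest importances, and a nearest-centroid rule stands in for the SVM wrapper; the data are synthetic, with two informative features planted among noise:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_feat = 200, 8
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, n_feat))
X[:, 0] += 2.0 * y          # informative feature
X[:, 1] += 1.5 * y          # weaker informative feature

def accuracy(cols):
    """Hold-out accuracy of a nearest-centroid classifier on `cols`."""
    tr, te = np.arange(0, n, 2), np.arange(1, n, 2)
    c0 = X[np.ix_(tr[y[tr] == 0], cols)].mean(axis=0)
    c1 = X[np.ix_(tr[y[tr] == 1], cols)].mean(axis=0)
    Z = X[np.ix_(te, cols)]
    pred = np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)
    return float((pred == y[te].astype(bool)).mean())

# Filter step: rank features by |correlation with the label|
scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)]
ranked = np.argsort(scores)[::-1]

# Wrapper step: greedy forward selection over the filtered ranking,
# keeping a feature only if it raises hold-out accuracy
selected, best = [], 0.0
for j in ranked:
    acc = accuracy(selected + [j])
    if acc > best:
        selected, best = selected + [j], acc
```

By construction the planted features are ranked first and accepted, while pure-noise features are usually rejected at the wrapper step.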