883 results for Radiographic Image Interpretation, Computer-Assisted
Abstract:
Swallowable capsule endoscopy is used for non-invasive diagnosis of some gastrointestinal (GI) organs. However, control over the position of the capsule is a major unresolved issue. This study presents a design for steering the capsule based on magnetic levitation. The levitation is stabilized with the aid of a computer-aided feedback control system and diamagnetism. The peristaltic and gravitational forces to be overcome were calculated. A levitation setup was built to analyze the feasibility of using Hall-effect sensors to locate the in vivo capsule. The CAD software Maxwell 3D (Ansoft, Pittsburgh, PA) was used to determine the dimensions of the resistive electromagnets required for levitation, and the feasibility of building them was examined. A comparison based on design complexity was made between positioning the patient supine and upright.
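As an illustration of the feedback-stabilization idea, the sketch below shows a discrete PID loop that adjusts electromagnet current from a Hall-effect position estimate to hold a capsule at a target height. This is not the paper's controller; the sensor and actuator functions, gains, and bias current are hypothetical placeholders.

```python
# Hedged sketch: PID stabilization of a levitated capsule from Hall-effect readings.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def control_loop(read_hall_position_mm, set_coil_current_a,
                 target_mm=10.0, dt=0.001, steps=10000):
    """Hold the capsule at target_mm above the sensor plane (illustrative values)."""
    pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=dt)
    bias_current = 1.2          # current that roughly balances gravity (assumed)
    for _ in range(steps):
        position = read_hall_position_mm()        # estimate from Hall-effect sensors
        correction = pid.update(target_mm - position)
        set_coil_current_a(bias_current + correction)
```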
Abstract:
In this paper, a novel method for edge detection, an application of digital image processing, is developed. Fuzzy logic, a key concept of artificial intelligence, is used to implement fuzzy relative pixel value algorithms that find and highlight all the edges in an image by checking relative pixel values, thereby bridging the concepts of digital image processing and artificial intelligence. The image is scanned exhaustively using a windowing technique, and each window is subjected to a set of fuzzy conditions that compare each pixel's value with those of its adjacent pixels to check the pixel magnitude gradient within the window. After the fuzzy conditions are tested, appropriate values are assigned to the pixels in the window under test, producing an image in which all the associated edges are highlighted.
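A minimal sketch of this style of edge detection follows. The membership thresholds and the specific fuzzy rule (largest absolute difference between the centre pixel and its 3x3 neighbours) are assumptions for illustration, not the paper's exact rule set.

```python
# Hedged sketch: fuzzy relative-pixel edge detection over a sliding 3x3 window.

import numpy as np

def edge_membership(diff, low=10.0, high=40.0):
    """Fuzzy membership: 0 below `low`, 1 above `high`, linear in between."""
    return np.clip((diff - low) / (high - low), 0.0, 1.0)

def fuzzy_edge_map(img):
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2]
            # largest absolute difference between the centre pixel and its neighbours
            diff = np.abs(window - img[y, x]).max()
            out[y, x] = edge_membership(diff)
    return (out * 255).astype(np.uint8)   # image with highlighted edges
```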
Abstract:
Respiratory gating in lung PET imaging to compensate for respiratory motion artifacts is a current research issue with broad potential impact on the quantitation, diagnosis, and clinical management of lung tumors. However, PET images collected in discrete gated bins can be significantly affected by noise, because each bin contains lower activity counts unless the total PET acquisition time is prolonged; gating methods should therefore be combined with image-based motion correction and registration methods. The aim of this study was to develop and validate a fast and practical solution to the problem of respiratory motion for the detection and accurate quantitation of lung tumors in PET images. This included: (1) developing a computer-assisted algorithm for PET/CT images that automatically segments lung regions in CT images and identifies and localizes lung tumors in PET images; (2) developing and comparing registration algorithms that process all the information within the entire respiratory cycle and integrate the tumor data from the different gated bins into a single reference bin. Four registration/integration algorithms (centroid-based, intensity-based, rigid-body, and optical-flow registration) were compared, as were two registration schemes (direct and successive). Validation was demonstrated by conducting experiments with the computerized 4D NCAT phantom and with a dynamic lung-chest phantom imaged using a GE PET/CT system. Experiments were conducted with simulated tumors of different sizes and at different noise levels. Static tumors without respiratory motion were used as the gold standard; quantitative results were compared with respect to tumor activity concentration, cross-correlation coefficient, relative noise level, and computation time. Comparing the tumors before and after correction, the corrected tumor activity values and tumor volumes were closer to those of the static tumors (gold standard). Higher correlation values and lower noise were also achieved after applying the correction algorithms. With this method, the compromise between short PET scan time and reduced image noise can be achieved, while quantification and clinical analysis become fast and precise.
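The sketch below illustrates one of the compared approaches, a centroid-based registration under a direct scheme: each gated bin is translated so that its tumor centroid matches the reference bin's centroid, and the bins are then summed. The bin volumes and tumor masks are assumed inputs; this is not the authors' implementation.

```python
# Hedged sketch: centroid-based integration of gated PET bins into a reference bin.

import numpy as np
from scipy import ndimage

def centroid_register(gated_bins, tumor_masks, reference=0):
    """gated_bins: list of 3D PET arrays; tumor_masks: matching boolean masks."""
    ref_centroid = np.array(ndimage.center_of_mass(tumor_masks[reference]))
    integrated = np.zeros_like(gated_bins[reference], dtype=np.float64)
    for volume, mask in zip(gated_bins, tumor_masks):
        centroid = np.array(ndimage.center_of_mass(mask))
        shift = ref_centroid - centroid            # translation to the reference bin
        integrated += ndimage.shift(volume, shift, order=1)
    return integrated
```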
Abstract:
Many students are entering colleges and universities in the United States underprepared in mathematics. National statistics indicate that only approximately one-third of students in developmental mathematics courses pass. When underprepared students repeatedly enroll in courses that do not count toward their degree, it costs them money and delays graduation. This study investigated a possible solution to this problem: whether using a particular computer-assisted learning strategy combined with mastery learning techniques improved the overall performance of students in a developmental mathematics course. Participants received one of three teaching strategies: (a) group A was taught using traditional instruction with mastery learning, supplemented with computer-assisted instruction; (b) group B was taught using traditional instruction supplemented with computer-assisted instruction, without mastery learning; and (c) group C was taught using traditional instruction without mastery learning or computer-assisted instruction. Participants were students in MAT1033, a developmental mathematics course at a large public 4-year college. An analysis of covariance using participants' pretest scores as the covariate tested the null hypothesis that there was no significant difference in the adjusted mean final examination scores among the three groups. Group A participants had a significantly higher adjusted mean posttest score than group C participants. A chi-square test tested the null hypothesis that there were no significant differences in the proportions of students who passed MAT1033 among the treatment groups. There was a significant difference in the proportion of students who passed among the three groups, with group A having the highest pass rate and group C the lowest. A discriminant factor analysis revealed that time on task correctly predicted the passing status of 89% of the participants. It was concluded that the most efficacious strategy for teaching developmental mathematics was mastery learning supplemented by computer-assisted instruction. In addition, time on task was a strong predictor of academic success over and above the predictive ability of a measure of previous knowledge of mathematics.
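For concreteness, the sketch below shows how the two main analyses described above could be run; the column names (posttest, pretest, group, passed) are assumptions, not the study's dataset.

```python
# Hedged sketch: ANCOVA with pretest as covariate, plus a chi-square test on pass rates.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

def analyze(df: pd.DataFrame):
    # ANCOVA: adjusted posttest means by group, controlling for pretest scores
    model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
    ancova_table = sm.stats.anova_lm(model, typ=2)

    # Chi-square test on pass/fail proportions across the three groups
    contingency = pd.crosstab(df["group"], df["passed"])
    chi2, p_value, dof, _ = chi2_contingency(contingency)
    return ancova_table, chi2, p_value
```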
Abstract:
The usefulness of computers in schools and training has been undisputed for several years. However, there is currently disagreement about which tasks computers can take on independently. When the takeover of teaching functions by computer-based tutoring systems is evaluated, deficiencies must frequently be noted. The aim of the present work is to identify, starting from current practical implementations of computer-based tutoring systems, different classes of central teaching competencies (student modelling, domain knowledge, and instructional activities in the narrower sense). Within each class, the global capabilities of the tutoring systems and the necessary, complementary activities of human tutors are identified. The resulting classification scheme allows both the categorization of typical tutoring systems and the identification of specific competencies that should receive more attention in future teacher and trainer education. (DIPF/Orig.)
Abstract:
Introduction: Prediction of soft tissue changes following orthognathic surgery has been frequently attempted in the past decades. It has gradually progressed from the classic "cut and paste" of photographs to computer-assisted 2D surgical prediction planning; finally, comprehensive 3D surgical planning was introduced to help surgeons and patients decide on the magnitude and direction of surgical movements as well as the type of surgery to be considered for the correction of facial dysmorphology. A wealth of experience has been gained, and a large body of published literature is available, which has augmented the knowledge of facial soft tissue behaviour and helped to improve the ability to closely simulate facial changes following orthognathic surgery. This was particularly noticeable following the introduction of three-dimensional imaging into medical research and clinical applications. Several approaches have been considered to mathematically predict soft tissue changes in three dimensions following orthognathic surgery; the most common are the finite element model and the mass tensor model. These were developed into software packages that are currently used in clinical practice. In general, these methods produce an acceptable level of prediction accuracy of soft tissue changes following orthognathic surgery. Studies, however, have shown limited prediction accuracy at specific regions of the face, in particular the areas around the lips.
Aims: The aim of this project was to conduct a comprehensive assessment of hard and soft tissue changes following orthognathic surgery and to introduce a new method for predicting facial soft tissue changes.
Methodology: The study was carried out on the pre- and post-operative CBCT images of 100 patients who received their orthognathic surgery treatment at Glasgow Dental Hospital and School, Glasgow, UK. Three groups of patients were included in the analysis: patients who underwent Le Fort I maxillary advancement surgery, bilateral sagittal split mandibular advancement surgery, or bimaxillary advancement surgery. A generic facial mesh was used to standardise the information obtained from each patient's facial image, and Principal Component Analysis (PCA) was applied to interpolate the correlations between the skeletal surgical displacement and the resultant soft tissue changes. The identified relationship between hard tissue and soft tissue was then applied to a new set of preoperative 3D facial images, and the predicted results were compared with the actual surgical changes measured from the post-operative 3D facial images. A set of validation studies was conducted, including:
• A comparison between voxel-based registration and surface registration for analysing changes following orthognathic surgery. The results showed no statistically significant difference between the two methods. Voxel-based registration, however, was more reliable, as it preserved the link between the soft tissue and the skeletal structures of the face during the image registration process. Accordingly, voxel-based registration was the method of choice for superimposition of the pre- and post-operative images. The result of this study was published in a refereed journal.
• Direct DICOM slice landmarking: a novel technique to quantify the direction and magnitude of skeletal surgical movements. This method represents a new approach to quantifying maxillary and mandibular surgical displacement in three dimensions. The technique involves measuring the distance of corresponding landmarks, digitized directly on DICOM image slices, in relation to three-dimensional reference planes. The accuracy of the measurements was assessed against a set of "gold standard" measurements extracted from simulated model surgery. The results confirmed the accuracy of the method to within 0.34 mm; therefore, the method was applied in this study. The results of this validation were published in a peer-refereed journal.
• The use of a generic mesh to assess soft tissue changes using stereophotogrammetry. The generic facial mesh played a major role in the soft tissue dense correspondence analysis. The conformed generic mesh represented the geometric information of the individual facial mesh onto which it was conformed (elastically deformed); therefore, the accuracy of generic mesh conformation is essential to guarantee an accurate replica of the individual facial characteristics. The results showed an acceptable overall mean conformation error of 1 mm. The results of this study were accepted for publication in a peer-refereed scientific journal.
Skeletal tissue analysis was performed using the validated direct DICOM slice landmarking method, while soft tissue analysis was performed using dense correspondence analysis. The soft tissue analysis was novel and produced a comprehensive description of facial changes in response to orthognathic surgery; the results were accepted for publication in a refereed scientific journal. The main soft tissue changes associated with Le Fort I surgery were advancement of the midface region combined with widening of the paranasal region, upper lip, and nostrils; minor changes were noticed at the tip of the nose and the oral commissures. The main soft tissue changes associated with mandibular advancement surgery were advancement and downward displacement of the chin and lower lip regions, limited widening of the lower lip, and slight reversion of the lower lip vermilion, combined with minimal backward displacement of the upper lip; minimal changes were observed at the oral commissures. The main soft tissue changes associated with bimaxillary advancement surgery were generalized advancement of the middle and lower thirds of the face combined with widening of the paranasal, upper lip, and nostril regions. In the Le Fort I cases, the correlation between the facial soft tissue changes and the skeletal surgical movements was assessed using PCA. A statistical method known as leave-one-out cross-validation was applied to the 30 cases that had a Le Fort I osteotomy in order to make effective use of the data for the prediction algorithm. The prediction accuracy of the soft tissue changes showed a mean error ranging from 0.0006 mm (±0.582) at the nose region to -0.0316 mm (±2.1996) across the other facial regions.
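The sketch below illustrates the general shape of a PCA-based prediction with leave-one-out cross-validation, as described above. It is not the authors' pipeline: the choice of a linear regression between skeletal displacement and soft-tissue PCA scores, the number of components, and the array layout are all assumptions for illustration.

```python
# Hedged sketch: leave-one-out evaluation of a PCA-compressed soft-tissue prediction.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def loo_prediction_errors(skeletal, soft_tissue, n_components=10):
    """skeletal: (n_cases, n_skeletal_dofs); soft_tissue: (n_cases, 3*n_vertices)."""
    errors = []
    for train_idx, test_idx in LeaveOneOut().split(skeletal):
        # compress per-vertex soft-tissue change vectors with PCA (training cases only)
        pca = PCA(n_components=n_components).fit(soft_tissue[train_idx])
        scores = pca.transform(soft_tissue[train_idx])
        # map skeletal surgical displacement to soft-tissue PCA scores
        model = LinearRegression().fit(skeletal[train_idx], scores)
        predicted = pca.inverse_transform(model.predict(skeletal[test_idx]))
        errors.append(predicted - soft_tissue[test_idx])   # signed error per coordinate (mm)
    return np.vstack(errors)
```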
Abstract:
Humans have a great ability to extract information from visual data acquired by sight. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes, and textures and associating them with high-level meanings; in this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioural characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but they do not allow unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that allow high-level information to be extracted from images in the form of soft biometrics. The problem is approached with both unsupervised and supervised learning methods. The first approach seeks to group images via automatically learned feature extraction, using convolution techniques, evolutionary computing, and clustering; the images employed contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images and learn both the feature extraction and the classification processes; here, images are classified according to gender and clothing, divided into the upper and lower parts of the human body. The first approach, when tested with different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for people versus non-people. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing, and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, allowing automatic high-level annotation of images. This opens possibilities for applications in diverse areas such as content-based image and video retrieval and automatic video surveillance, reducing human effort in the tasks of manual annotation and monitoring.
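A minimal sketch of the supervised approach is shown below. The architecture, input size, and layer counts are assumptions chosen for illustration; they are not the network used in the thesis.

```python
# Hedged sketch: a small CNN that classifies a 64x64 RGB crop into two gender classes.

import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, x):                  # x: (batch, 3, 64, 64)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# usage sketch: forward a batch of random crops
model = SmallCNN()
logits = model(torch.randn(4, 3, 64, 64))   # -> (4, 2) class scores
```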
Abstract:
Inter-subject parcellation of functional Magnetic Resonance Imaging (fMRI) data based on a standard General Linear Model (GLM) and spectral clustering was recently proposed as a means to alleviate the issues associated with spatial normalization in fMRI. However, for all its appeal, a GLM-based parcellation approach introduces its own biases, in the form of a priori knowledge about the shape of the Hemodynamic Response Function (HRF) and task-related signal changes, or about the subject's behaviour during the task. In this paper, we introduce a data-driven version of the spectral clustering parcellation, based on Independent Component Analysis (ICA) and Partial Least Squares (PLS) instead of the GLM. First, a number of independent components are automatically selected. Seed voxels are then obtained from the associated ICA maps, and we compute the PLS latent variables between the fMRI signal of the seed voxels (which covers regional variations of the HRF) and the principal components of the signal across all voxels. Finally, we parcellate all subjects' data with a spectral clustering of the PLS latent variables. We present results of the application of the proposed method on both single-subject and multi-subject fMRI datasets. Preliminary experimental results, evaluated with the intra-parcel variance of GLM t-values and PLS-derived t-values, indicate that this data-driven approach offers an improvement in parcellation accuracy over GLM-based techniques.
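The sketch below follows the broad outline of this pipeline with off-the-shelf components. The seed-selection rule (one peak voxel per ICA map), the voxel features used for clustering (correlation with the PLS latent time courses), and the component/parcel counts are assumptions of this sketch, not the paper's exact choices.

```python
# Hedged sketch: ICA -> seed voxels -> PLS latent variables -> spectral clustering parcellation.

import numpy as np
from sklearn.decomposition import FastICA, PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.cluster import SpectralClustering

def parcellate(fmri, n_ics=20, n_pls=5, n_parcels=50):
    """fmri: (n_timepoints, n_voxels) array of preprocessed BOLD signal."""
    ica = FastICA(n_components=n_ics, random_state=0).fit(fmri)
    seeds = np.abs(ica.components_).argmax(axis=1)          # one seed voxel per IC map
    pcs = PCA(n_components=n_pls).fit_transform(fmri)        # PCs of the signal across voxels
    pls = PLSRegression(n_components=n_pls).fit(fmri[:, seeds], pcs)
    latent = pls.x_scores_                                    # (n_timepoints, n_pls)
    # voxel-wise feature: correlation of each voxel with each latent time course
    voxels_z = (fmri - fmri.mean(0)) / (fmri.std(0) + 1e-12)
    latent_z = (latent - latent.mean(0)) / (latent.std(0) + 1e-12)
    features = voxels_z.T @ latent_z / fmri.shape[0]          # (n_voxels, n_pls)
    return SpectralClustering(n_clusters=n_parcels, random_state=0).fit_predict(features)
```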
Abstract:
In this study, we aimed to evaluate the effects of exenatide (EXE) treatment on the exocrine pancreas of nonhuman primates. To this end, 52 baboons (Papio hamadryas) underwent partial pancreatectomy, followed by continuous infusion of EXE or saline (SAL) for 14 weeks. Histological analysis, immunohistochemistry, Computer Assisted Stereology Toolbox morphometry, and immunofluorescence staining were performed at baseline and after treatment. EXE treatment did not induce pancreatitis, parenchymal or periductal inflammatory cell accumulation, ductal hyperplasia, or dysplastic lesions/pancreatic intraepithelial neoplasia. At study end, the number of Ki-67-positive (proliferating) acinar cells did not change from baseline in either group. The number of Ki-67-positive ductal cells increased after EXE treatment (P = 0.04); however, the change did not differ significantly between the EXE and SAL groups (P = 0.13). The number of M-30-positive (apoptotic) acinar and ductal cells did not change after SAL or EXE treatment. No changes in ductal density and volume were observed after EXE or SAL. Interestingly, by triple-immunofluorescence staining, we detected c-kit (a marker of cell transdifferentiation)-positive ductal cells co-expressing insulin in ducts only in the EXE group at study end, suggesting that EXE may promote the differentiation of ductal cells toward a β-cell phenotype. In conclusion, 14 weeks of EXE treatment did not exert any negative effect on the exocrine pancreas, as it induced neither pancreatic inflammation nor hyperplasia/dysplasia in nonhuman primates.
Abstract:
OBJECTIVES: To evaluate risk factors for chronic non-communicable diseases (NCDs) and to identify social inequalities in their distribution in the Brazilian adult population. METHODS: Risk factors for NCDs (including tobacco use, overweight and obesity, low fruit and vegetable intake [LFVI], insufficient leisure-time physical activity [ILTPA], sedentary lifestyle, and excessive alcohol consumption) were studied in a probability sample of 54,369 adults from the 26 Brazilian state capitals and the Federal District in 2006. The Surveillance System for Protective and Risk Factors for Chronic Non-Communicable Diseases by Telephone Interviews (VIGITEL), a computer-assisted telephone survey system, was used, and age-adjusted prevalences were calculated for trends by educational level using Poisson regression with linear models. RESULTS: Men reported more tobacco use, overweight, LFVI, sedentary lifestyle, and excessive alcohol consumption than women, but less ILTPA. Among men, education was associated with more overweight and a more sedentary lifestyle, but with less tobacco use, LFVI, and ILTPA. Among women, education was associated with less tobacco use, overweight, obesity, LFVI, and ILTPA, but with a more sedentary lifestyle. CONCLUSIONS: In Brazil, the prevalence of risk factors for NCDs (except ILTPA) is higher in men than in women. In both sexes, educational level influences the prevalence of NCD risk factors.
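As an illustration of the kind of trend analysis described above, the sketch below estimates a prevalence ratio for one risk factor per additional year of schooling, adjusted for age, using Poisson regression with robust standard errors. The variable names and model form are assumptions, not VIGITEL's published specification.

```python
# Hedged sketch: education trend in the prevalence of a risk factor via Poisson regression.

import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def prevalence_trend(df):
    """df has a binary `smoker` outcome plus `schooling_years` and `age` covariates."""
    model = smf.glm("smoker ~ schooling_years + age", data=df,
                    family=sm.families.Poisson()).fit(cov_type="HC0")
    # exponentiated coefficient = prevalence ratio per extra year of schooling
    return np.exp(model.params["schooling_years"]), model.pvalues["schooling_years"]
```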
Gender identification of five genera of stingless bees (Apidae, Meliponini) based on wing morphology
Abstract:
Currently, the identification of pollinators is a critical need for conservation programs. After it was found that features extracted from wing venation patterns are sufficient to discriminate among insect species, various studies have focused on this structure. We examined the wing venation patterns of males and workers of five stingless bee species in order to determine whether there are differences between the sexes and whether these differences are greater within than between species. Geometric morphometric analyses were made of the forewings of males and workers of Nannotrigona testaceicornis, Melipona quadrifasciata, Frieseomelitta varia, Scaptotrigona aff. depilis, and Plebeia remota. The patterns of males and workers from the same species were more similar than the patterns of individuals of the same sex from different species, and the patterns of both males and workers, when analyzed alone, were sufficiently different to distinguish among these five species. This demonstrates that this kind of analysis can be used for the identification of stingless bee species and that the sex of the individual does not impede identification. Computer-assisted morphometric analysis of bee wing images can be a useful tool for biodiversity studies and conservation programs.
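The sketch below illustrates the general flavour of such a computer-assisted morphometric classification: wing landmarks are Procrustes-aligned (here to a single reference specimen, a simplification of full generalized Procrustes analysis) and the aligned coordinates are classified by linear discriminant analysis with cross-validation. This is an illustrative workflow, not the study's exact analysis.

```python
# Hedged sketch: Procrustes alignment of wing landmarks + LDA species classification.

import numpy as np
from scipy.spatial import procrustes
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def classify_wings(landmarks, labels):
    """landmarks: (n_specimens, n_landmarks, 2) wing venation points; labels: species names."""
    reference = landmarks[0]
    aligned = []
    for specimen in landmarks:
        # remove size, rotation, and translation relative to the reference configuration
        _, fitted, _ = procrustes(reference, specimen)
        aligned.append(fitted.ravel())
    X = np.array(aligned)
    lda = LinearDiscriminantAnalysis()
    return cross_val_score(lda, X, labels, cv=5)   # per-fold classification accuracy
```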
Abstract:
Background: Minimally invasive techniques have been revolutionary and provide clinical evidence of decreased morbidity and efficacy comparable to that of traditional open surgery. Computer-assisted surgical devices have recently been approved for general surgical use. Aim: The aim of this study was to report the first known case of pancreatic resection with the use of a computer-assisted, or robotic, surgical device in Latin America. Patient and Methods: A 37-year-old female with a previous history of radical mastectomy for bilateral breast cancer due to a BRCA2 mutation presented with an episode of acute pancreatitis. Radiologic investigation disclosed an intraductal pancreatic neoplasm located in the neck of the pancreas with atrophy of the body and tail. The main pancreatic duct was enlarged. The surgical decision was to perform a laparoscopic subtotal pancreatectomy using the da Vinci® robotic system (Intuitive Surgical, Sunnyvale, CA). Five trocars were used. Pancreatic transection was achieved with a vascular endoscopic stapler. The surgical specimen was removed without an additional incision. Results: Operative time was 240 minutes. Blood loss was minimal, and the patient did not receive a transfusion. Recovery was uneventful, and the patient was discharged on postoperative day 4. Conclusions: Subtotal laparoscopic pancreatic resection can be performed safely. The da Vinci robotic system allowed technical refinements of laparoscopic pancreatic resection. Robotic assistance improved the dissection and control of major blood vessels owing to three-dimensional visualization of the operative field and instruments with wrist-type end-effectors.
Abstract:
This paper presents a framework for building medical training applications using virtual reality, together with a tool that helps instantiate the framework's classes. The main purpose is to make it easier to build virtual reality applications in the medical training area, considering systems that simulate biopsy exams, and to make deformation, collision detection, and stereoscopy functionalities available. Instantiating the classes allows quick implementation of tools for such a purpose, thus reducing errors and keeping costs low through the use of open-source tools. Using the instantiation tool, the process of building applications is fast and easy, so computer programmers can obtain an initial application and adapt it to their needs. The tool allows the user to include, delete, and edit parameters of the chosen functionalities, as well as store these parameters for future use. In order to verify the efficiency of the framework, some case studies are presented.
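To make the instantiation idea concrete, the sketch below assembles a biopsy-training application from pluggable functionality modules with editable parameters. The class names and parameters are illustrative assumptions, not the framework's actual API.

```python
# Hedged sketch: assembling a training application by instantiating framework modules.

from dataclasses import dataclass, field

@dataclass
class Deformation:
    stiffness: float = 0.8          # stiffness used for tissue deformation (assumed parameter)

@dataclass
class CollisionDetection:
    tolerance_mm: float = 0.5       # distance at which needle/tissue contact is reported

@dataclass
class Stereoscopy:
    eye_separation_mm: float = 60.0

@dataclass
class TrainingApplication:
    name: str
    modules: dict = field(default_factory=dict)

    def add(self, key, module):      # the instantiation tool includes/edits modules like this
        self.modules[key] = module
        return self

# building an initial biopsy-simulation application that can later be adapted
app = (TrainingApplication("biopsy_trainer")
       .add("deformation", Deformation(stiffness=0.6))
       .add("collision", CollisionDetection())
       .add("stereoscopy", Stereoscopy()))
```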