962 results for LEVEL SET


Relevance:

60.00%

Abstract:

OBJECTIVE: To determine the prevalence of mesiodens in deciduous and mixed dentitions and its association with other dental anomalies. MATERIAL AND METHODS: Panoramic radiographs of 1,995 orthodontic patients were analyzed retrospectively, yielding a final sample of 30 patients with mesiodens. The following aspects were analyzed: gender; number of mesiodens; proportion between erupted and non-erupted mesiodens; initial position of the supernumerary tooth; related complications; treatment plan accomplished; and associated dental anomalies. The frequency of dental anomalies in the sample was compared with reference values for the general population using the chi-square test (χ²), with a significance level set at 5%. RESULTS: The prevalence of mesiodens was 1.5%, and the condition was more common among males (1.5:1). Most of the mesiodens were non-erupted (75%) and in a vertical position, facing the oral cavity. Extraction of the mesiodens was the most common treatment. The main complications associated with mesiodens were delayed eruption of permanent incisors (34.28%) and midline diastema (28.57%). Of all the dental anomalies analyzed, only the prevalence of maxillary lateral incisor agenesis was higher than in the general population. CONCLUSION: There was a low prevalence of mesiodens (1.5%) in deciduous and mixed dentitions, and the condition was not associated with other dental anomalies, except for maxillary lateral incisor agenesis.
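The comparison of a sample anomaly frequency against a general-population reference value can be sketched as a chi-square goodness-of-fit test. The counts and the 2% reference prevalence below are hypothetical illustrations, not the study's data.

```python
def chi_square_vs_reference(observed, n, p_ref):
    """Goodness-of-fit chi-square: does the observed count out of n
    differ from a reference population proportion p_ref?"""
    expected = [n * p_ref, n * (1 - p_ref)]
    counts = [observed, n - observed]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(counts, expected))
    # critical value for 1 degree of freedom at the 5% level
    return chi2, chi2 > 3.841

# hypothetical example: 4 of 30 mesiodens patients show lateral
# incisor agenesis, against an assumed 2% reference prevalence
chi2, significant = chi_square_vs_reference(4, 30, 0.02)
```

With a significance level set at 5% and one degree of freedom, the statistic is simply compared against the critical value 3.841.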

Relevance:

60.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

60.00%

Abstract:

Introduction: The biological processes involved in noise-induced hearing loss (NIHL) are still unclear; the involvement of inflammation in this condition has been suggested. Objective: To investigate the association between an interleukin-6 (IL-6) polymorphism and susceptibility to NIHL. Methods: This was a cross-sectional study with a sample of 191 independent elderly individuals aged >60 years. Information on exposure to occupational noise was obtained through interviews. Audiological evaluation was performed using pure tone audiometry, and genotyping was performed by PCR-restriction fragment length polymorphism (PCR-RFLP). Data were analyzed using the chi-square test and the odds ratio (OR), with the significance level set at 5%. Results: Among the elderly with hearing loss (78.0%), 18.8% had a history of exposure to occupational noise. There was a statistically significant association between the genotype frequencies of IL-6 −174 and NIHL. The elderly with the CC genotype were less likely to have hearing loss due to occupational noise exposure than those carrying the GG genotype (OR = 0.0124; 95% CI 0.0023-0.0671; p < 0.001). Conclusion: This study suggests an association between polymorphisms in the IL-6 gene at position −174 (G174C) and susceptibility to noise-induced hearing loss. (C) 2014 Associacao Brasileira de Otorrinolaringologia e Cirurgia Cervico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
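The odds ratio with its 95% confidence interval, as reported above, is typically computed from a 2×2 table with Woolf's logit method; the counts below are made-up values for illustration, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with 95% CI (Woolf's logit method) for a 2x2 table:
    a/b = cases/controls in the exposed group,
    c/d = cases/controls in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# hypothetical counts, not the study's data
or_, lower, upper = odds_ratio_ci(10, 20, 30, 40)
```

An OR below 1 with a confidence interval that excludes 1 indicates a protective association, as with the CC genotype in the abstract.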

Relevance:

60.00%

Abstract:

Purpose: The aim of the present study was to evaluate the effects of intensity and recovery interval on performance in the bench press exercise, and the response of salivary lactate and alpha-amylase levels. Methods: Ten sportsmen (aged 29 ± 4 years; body mass index 26 ± 2 kg/m²) were divided into two groups: G70 (performing the bench press at 70% of one-repetition maximum, 1RM) and G90 (performing the bench press at 90% 1RM). Both groups performed sets with three recovery intervals (30, 60 and 90 s). The maximum number of repetitions (MNR) and total weight lifted were recorded, and saliva samples were collected 15 min before exercise and after the different recovery intervals. For the comparison of performance and biochemical parameters, repeated-measures ANOVA tests were conducted, with a significance level set at 5%. Results: In G70, the MNR with the 30 s recovery interval was lower than with the 60 and 90 s intervals (p < 0.05), and the MNR with the 60 s interval was lower than with the 90 s interval (p < 0.041). Similarly, in G90 the MNR with the 30 s recovery interval was lower than with the 60 and 90 s intervals (p < 0.05), and the MNR with the 60 s interval was lower than with the 90 s interval (p < 0.05). Salivary lactate increased after exercise (p < 0.05) compared with the rest period in all groups, and no effects were observed for salivary alpha-amylase. Conclusions: Based on these results, sets and repetitions can be adjusted by changing the recovery time, which is useful for tailoring performance to different fitness levels.

Relevance:

60.00%

Abstract:

Substances containing chlorhexidine (CHX) have been studied as intracanal medicaments. The aim of the present study was to characterize the response of mouse subcutaneous connective tissue to CHX-containing medications by conventional optical microscopy. The tissue response was evaluated by implanting polyethylene tubes containing one of the substances evaluated: Calen paste + 0.5% CHX, Calen + 2% CHX, 2% CHX gel, or Calen paste (control). After experimental periods of 7, 21, and 63 days, the implants (n = 10) were removed along with the surrounding subcutaneous connective tissue. Tissue samples were subjected to histological processing, and sections were stained with hematoxylin and eosin. Qualitative and quantitative analyses of the number of inflammatory cells, blood vessels, and vascularized areas were performed. Results were analyzed by ANOVA and Tukey tests with the significance level set at 5%. We concluded that Calen + 0.5% CHX led to a reparative tissue response, in contrast with Calen + 2% CHX and 2% CHX gel, which induced a persistent inflammatory response, pointing to the aggressive nature of these mixtures. When Calen + 2% CHX and 2% CHX gel were compared, the latter induced a more intense inflammatory response. Microsc. Res. Tech., 2012. (C) 2012 Wiley Periodicals, Inc.

Relevance:

60.00%

Abstract:

In this work, a new enrichment space to accommodate jumps in the pressure field at immersed interfaces in finite element formulations is proposed. The new enrichment adds two degrees of freedom per element that can be eliminated by means of static condensation. The new space is tested and compared with the classical P1 space and with the space proposed by Ausas et al. (Comput. Methods Appl. Mech. Eng., Vol. 199, pp. 1019-1031, 2010) in several problems involving jumps in the viscosity and/or the presence of singular forces at interfaces not conforming with the element edges. The combination of this enrichment space with another enrichment that accommodates discontinuities in the pressure gradient has also been explored, exhibiting excellent results in problems involving jumps in the density or the volume forces. Copyright (c) 2011 John Wiley & Sons, Ltd.
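The static-condensation step mentioned above (eliminating element-internal degrees of freedom via a Schur complement) can be sketched for a single internal DOF; the small symmetric system below is a made-up example, not one from the paper.

```python
def condense_single(K, f, i):
    """Statically condense internal DOF i out of the element system
    K u = f: the Schur complement keeps only the remaining DOFs, and
    the interior unknown can be recovered afterwards."""
    ext = [r for r in range(len(K)) if r != i]
    kii = K[i][i]
    Kc = [[K[r][c] - K[r][i] * K[i][c] / kii for c in ext] for r in ext]
    fc = [f[r] - K[r][i] * f[i] / kii for r in ext]
    return Kc, fc

# made-up 2x2 symmetric system; condense DOF 1
Kc, fc = condense_single([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], 1)
u0 = fc[0] / Kc[0][0]   # solution of the condensed 1x1 system
```

Solving the condensed system yields the same exterior solution as solving the full system directly, which is why the two enrichment DOFs per element add no cost to the global solve.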

Relevance:

60.00%

Abstract:

Objective: The aim of this study was to evaluate the degree of conversion and hardness of different composite resins photo-activated for 40 s with two different light guide tips, fiber optic and polymer. Methods: Five specimens were made for each group evaluated. The percentage of unreacted carbon double bonds (% C=C) was determined from the ratio of the absorbance intensity of aliphatic C=C (peak at 1637 cm-1) to that of the aromatic C-C internal standard (peak at 1610 cm-1), measured before and after curing of the specimen. The Vickers hardness measurements were performed in a universal testing machine, using a 50 gf load and a dwell time of 30 seconds for the indenter. The degree of conversion and hardness mean values were analyzed separately by ANOVA and Tukey's test, with a significance level set at 5%. Results: The mean degree-of-conversion values for the polymer and fiber optic light guide tips were statistically different (P<.001). The hardness mean values were statistically different between the light guide tips (P<.001), and there was also a difference between top and bottom surfaces (P<.001). Conclusions: The results showed that the resins photo-activated with the fiber optic light guide tip reached higher degree of conversion and hardness values.
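The degree-of-conversion calculation described above (aliphatic C=C at 1637 cm-1 ratioed against the aromatic internal standard at 1610 cm-1, before vs. after curing) reduces to a short formula; the absorbance values below are hypothetical.

```python
def degree_of_conversion(aliph_uncured, arom_uncured, aliph_cured, arom_cured):
    """DC% = (1 - R_cured / R_uncured) * 100, where R is the ratio of
    aliphatic C=C absorbance (1637 cm^-1) to the aromatic C-C internal
    standard (1610 cm^-1)."""
    r_uncured = aliph_uncured / arom_uncured
    r_cured = aliph_cured / arom_cured
    return (1 - r_cured / r_uncured) * 100

# hypothetical absorbance intensities before and after curing
dc = degree_of_conversion(0.80, 1.00, 0.35, 1.00)
```

Normalizing by the aromatic peak cancels out differences in specimen thickness and spectrometer gain between the two measurements.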

Relevance:

60.00%

Abstract:

3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors; (b) 2D image and 3D model resolutions; (c) incorrect contour extraction; (d) bone model symmetries; (e) optimization algorithm limitations; (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process.
To solve this problem, two different approaches were followed: to increase the optimal-pose convergence basin, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies; the mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed that halved the analysis time, delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semi-automatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study on foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
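The sequential one-DOF-at-a-time alignment strategy can be caricatured as coordinate descent on a pose cost. The quadratic cost and the target pose below are synthetic stand-ins for the image-similarity metric actually optimized in the thesis.

```python
def sequential_align(cost, pose, order, step=0.1, iters=20):
    """Crude coordinate descent: repeatedly adjust one pose parameter
    at a time (most sensitive DOF first) while the cost decreases."""
    pose = list(pose)
    for _ in range(iters):
        for i in order:
            for delta in (-step, step):
                trial = pose[:i] + [pose[i] + delta] + pose[i + 1:]
                while cost(trial) < cost(pose):
                    pose[i] += delta
                    trial = pose[:i] + [pose[i] + delta] + pose[i + 1:]
    return pose

# synthetic cost: squared distance to a known 6-DOF target pose
target = [1.0, -0.5, 0.3, 0.0, 0.2, -0.1]
cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
pose = sequential_align(cost, [0.0] * 6, order=[0, 1, 2, 3, 4, 5])
```

Aligning the most sensitive degrees of freedom first shrinks the residual cost early, widening the basin within which the remaining parameters converge.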

Relevance:

60.00%

Abstract:

Myocardial perfusion quantification by means of Contrast-Enhanced Cardiac Magnetic Resonance images relies on time-consuming frame-by-frame manual tracing of regions of interest. In this thesis, a novel automated technique for myocardial segmentation and non-rigid registration as a basis for perfusion quantification is presented. The proposed technique is based on three steps: reference frame selection, myocardial segmentation, and non-rigid registration. In the first step, the reference frame in which both endo- and epicardial segmentation will be performed is chosen. Endocardial segmentation is achieved by means of a statistical region-based level-set technique followed by a curvature-based regularization motion. Epicardial segmentation is achieved by means of an edge-based level-set technique, again followed by a regularization motion. To take into account the changes in position, size, and shape of the myocardium throughout the sequence due to out-of-plane respiratory motion, a non-rigid registration algorithm is required. The proposed non-rigid registration scheme consists of a novel multiscale extension of the normalized cross-correlation algorithm in combination with level-set methods. The myocardium is then divided into standard segments. Contrast enhancement curves are computed by measuring the mean pixel intensity of each segment over time, and perfusion indices are extracted from each curve. The overall approach has been tested on synthetic and real datasets. For validation purposes, the sequences were manually traced by an experienced interpreter, and contrast enhancement curves as well as perfusion indices were computed. Comparisons between automatically extracted and manually obtained contours and enhancement curves showed high inter-technique agreement.
Comparisons of perfusion indices computed using both approaches against quantitative coronary angiography and visual interpretation demonstrated that the two techniques have similar diagnostic accuracy. In conclusion, the proposed technique allows fast, automated, and accurate measurement of intra-myocardial contrast dynamics, and may thus address the strong clinical need for quantitative evaluation of myocardial perfusion.
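At the core of the registration scheme above is normalized cross-correlation; a minimal 1-D sketch follows (the multiscale extension and level-set machinery of the thesis are omitted, and the signals are synthetic).

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length signals,
    in [-1, 1]; 1 means identical up to gain and offset."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = (math.sqrt(sum((x - mean_a) ** 2 for x in a))
           * math.sqrt(sum((y - mean_b) ** 2 for y in b)))
    return num / den

signal = [0.0, 1.0, 2.0, 1.0, 0.0]
scaled = [2 * v + 3 for v in signal]   # same shape, different gain/offset
score = ncc(signal, scaled)
```

Invariance to gain and offset is what makes this similarity measure suitable for contrast-enhanced sequences, where intensities change from frame to frame.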

Relevance:

60.00%

Abstract:

Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer-aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures to automatically measure organ dimensions and compute clinically relevant functional indices. Research on UTC produced a CAD tool for prostate cancer detection to improve the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPUs for real-time performance; (iii) the introduction of both an innovative semi-supervised learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training that improve system performance, reducing the data collection effort and avoiding waste of collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart disease diagnostic tool based on Real-Time 3D Echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation.
Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.

Relevance:

60.00%

Abstract:

This thesis work was carried out in collaboration with the electrophysiology laboratory of the Cardiology Operative Unit, Cardiovascular Department, of the "S. Maria delle Croci" hospital in Ravenna (Azienda Unità Sanitaria Locale della Romagna), and its objective is the development of a method for identifying the left atrium in sequences of intracardiac ultrasound images acquired during transcatheter cardiac ablation procedures for the treatment of atrial fibrillation. Locating the posterior wall of the left atrium in intracardiac echocardiographic images is essential whenever the position of the esophagus relative to that wall must be monitored to reduce the risk of formation of an atrio-esophageal fistula. The intracardiac ultrasound images were acquired during the cardiac ablation procedure and exported directly from the scanner in Audio Video Interleave (AVI) format. The extraction of the individual frames was performed with a dedicated program implemented in Matlab, thus obtaining the data set on which to implement the atrial-wall detection method. Because of the excessive noise present inside the atrial chamber in some data sets, two different methods were developed for the automatic tracing of the contour of the left atrial wall. The first, used for the "cleaner" images, is based on the Chan-Vese model, a region-based level-set segmentation method, while the second, effective in the presence of noise, exploits the K-means clustering method. Both methods detect the atrium automatically, without the clinician providing any information about its position, and use morphological operators to eliminate spurious regions.
The results thus obtained were evaluated qualitatively, by overlaying the detected contour on the ultrasound image and assessing the quality of the tracing. In addition, for two data sets, segmented with the two different methods, a quantitative evaluation was performed by comparing the results with the manual tracing performed by the clinician.
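The K-means clustering step used for the noisier data sets can be illustrated on scalar intensities; the grey-level values below are synthetic, standing in for a dark atrial chamber and a bright wall.

```python
def kmeans_1d(values, k=2, iters=50):
    """Plain k-means on scalar intensities (e.g. pixel grey levels):
    returns a cluster label per value and the final cluster centers."""
    lo, hi = min(values), max(values)
    # spread the initial centers across the intensity range
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each value to the nearest center
        labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                  for v in values]
        # move each center to the mean of its cluster
        for j in range(k):
            cluster = [v for v, lab in zip(values, labels) if lab == j]
            if cluster:
                centers[j] = sum(cluster) / len(cluster)
    return labels, centers

# synthetic grey levels: a dark chamber and a bright wall
labels, centers = kmeans_1d([0.1, 0.15, 0.2, 0.8, 0.85, 0.9])
```

Unlike the Chan-Vese model, this clustering step needs no contour initialization, which is what makes it robust to the noise inside the atrial chamber.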

Relevance:

60.00%

Abstract:

Ultrasound imaging is an investigation technique commonly used for many diagnostic and therapeutic applications. The technique has numerous advantages: it is non-invasive, it provides real-time images, and the necessary equipment is easily transportable. The images obtained with this technique, however, have a low signal-to-noise ratio because of the low contrast and of the noise characteristic of ultrasound images, known as speckle noise. A correct segmentation of the anatomical structures in ultrasound images is of fundamental importance in many medical applications. In clinical practice, the identification of anatomical structures is in many cases still obtained by manual tracing of the contours. This process is time-consuming and produces poorly reproducible results that depend on the experience of the clinician performing the operation. In the cardiac field, echocardiographic examination is the basis of the study of the morphology and function of the myocardium. Echocardiographic systems capable of acquiring a volumetric data set in real time, available for clinical applications only in recent years, have demonstrated their superiority over two-dimensional echocardiography and are considered by the medical and scientific community to be the acquisition technique that will replace cardiac magnetic resonance in the near future. In order to fully exploit the volumetric information contained in these data, numerous automatic or semi-automatic segmentation methods aimed at evaluating the volume of the left ventricle have been developed in recent years. This thesis describes the design, development, and validation of a quasi-automatic 3D ventricular segmentation method, obtained by integrating the theory of level-set models with the theory of the monogenic signal.
This approach makes it possible to overcome the limits due to the poor quality of the images, thanks to the replacement of the intensity information with the phase information, which contains all the structural information of the signal.

Relevance:

60.00%

Abstract:

Vertebroplasty is a minimally invasive procedure with many benefits; however, it is not without risks and potential complications, of which leakage of the cement out of the vertebral body and into the surrounding tissues is one of the most serious. Cement can leak into the spinal canal, venous system, soft tissues, lungs, and intradiscal space, causing serious neurological complications, tissue necrosis, or pulmonary embolism. We present a method for automatic segmentation and tracking of bone cement during vertebroplasty procedures, as a first step towards developing a warning system to avoid cement leakage outside the vertebral body. We show that, by using active contours based on level sets, the shape of the injected cement can be accurately detected. The segmentation model proposed in our previous work has been improved by including a term that restricts the level-set function to the vertebral body. The method has been applied to a set of real intra-operative X-ray images, and the results show that the algorithm can successfully detect different shapes with blurred and ill-defined boundaries, where classical active-contour segmentation is not applicable. The method has been positively evaluated by physicians.
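The idea of restricting a level-set function to a region of interest can be shown with a toy clamp (negative-inside sign convention; the tiny grids below are synthetic and this is not the authors' actual restriction term).

```python
def restrict_to_mask(phi, mask, outside=1.0):
    """Clamp a level-set field phi (negative inside the contour) so the
    contour cannot leave the allowed region: wherever mask is 0, phi is
    forced non-negative, expelling the zero level set from that area."""
    return [[p if m else max(p, outside)
             for p, m in zip(phi_row, mask_row)]
            for phi_row, mask_row in zip(phi, mask)]

phi = [[-1.0, -1.0],
       [-1.0,  2.0]]          # contour interior covers three cells
mask = [[1, 0],
        [1, 1]]               # top-right cell lies outside the vertebra
clamped = restrict_to_mask(phi, mask)
```

Applied after each evolution step, such a constraint keeps the detected cement contour inside the vertebral body, mirroring the role of the restriction term described in the abstract.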

Relevance:

60.00%

Abstract:

Rising seawater temperature and CO2 concentrations (ocean acidification) represent two of the most influential factors impacting marine ecosystems in the face of global climate change. In ecological climate change research, full-factorial experiments across seasons in multi-species, cross-trophic-level set-ups are essential, as they allow realistic estimation of the direct and indirect effects and the relative importance of both major environmental stressors on ecosystems. In benthic mesocosm experiments, we tested the responses of coastal Baltic Sea Fucus vesiculosus communities to elevated seawater temperature and CO2 concentrations across four seasons of one year. While increased [CO2] levels had only minor effects, warming had strong and persistent effects on grazers, which affected the Fucus community differently depending on season. In late summer, a temperature-driven collapse of grazers caused a cascading effect from the consumers to the foundation species, resulting in overgrowth of Fucus thalli by epiphytes. In fall/winter, outside the growing season of epiphytes, intensified grazing under warming resulted in a significant reduction of Fucus biomass. Thus, we confirm the prediction that future increases in water temperature will influence marine food-web processes by altering top-down control, but we also show that the specific consequences for food-web structure depend on season. Since Fucus vesiculosus is the dominant habitat-forming brown algal system in the Baltic Sea, its potential decline under global warming implies the loss of key functions and services such as the provision of nutrient storage, substrate, food, shelter, and nursery grounds for a diverse community of marine invertebrates and fish in Baltic Sea coastal waters.