Abstract:
The evolution of computer animation represents one of the most relevant and revolutionary aspects of the rise of contemporary digital visual culture (Darley, 2000), in particular phenomena such as "spectacular" cinema (ibidem) and video games. This article analyzes the characteristics of this "culture of simulation" (Turkle, 1995: 20), relating the multidisciplinary spectrum of technical and stylistic choices to the dimension of virtual character acting. The result of these hybrid mixtures and computerized human motion capture techniques - called virtual cinema, universal capture, motion capture, etc. - consists mainly in the sophistication of rotoscoping as a new interpretation and appropriation of the captured image. This human motion capture technology, used largely by cinema and digital games, is one of the reasons why the authenticity of the animation is sometimes questioned. It is in the field of 3D computer animation that this change is most significant, with innovative techniques of image manipulation and "hyper-cinema" (Lamarre, 2006: 31) character control with a deeper sense of emotions appearing regularly. This shift in the culture - which Manovich (2006: 27) calls "photo-GRAPHICS", and which Mulvey (2007) argues creates a new form of possessive relationship with the viewer, in that the viewer can analyze the image in detail, acquire it and modify it - is one of the most important aspects of the rise of Cubitt's (2007) "cinema of attraction". This article delves into the analysis of virtual character animation, particularly in the field of 3D computer animation and digital human acting.
Abstract:
Forest cover of the Maringá municipality, located in northern Paraná State, was mapped in this study. Mapping was carried out using high-resolution HRC sensor imagery and medium-resolution CCD sensor imagery from the CBERS satellite. Images were georeferenced and forest vegetation patches (TOFs - trees outside forests) were classified using two digital classification methods: pixel-based, using the reflectance or digital number of each pixel, and object-oriented. The area of each polygon was calculated, which allowed the polygons to be segregated into size classes. Thematic maps were built from the resulting polygon size classes, and summary statistics were generated for each size class in each area. It was found that most forest fragments in Maringá were smaller than 500 m². There was also a difference of 58.44% in the amount of vegetation detected between the high-resolution and medium-resolution imagery, due to the distinct spatial resolution of the sensors. It was concluded that high-resolution geotechnology is essential to provide reliable information on urban green areas and forest cover in highly human-perturbed landscapes.
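The size-class segregation and summary statistics described above can be sketched as follows. This is a minimal illustration: the class breaks and patch areas are invented, not those used in the study.

```python
from collections import defaultdict

# Hypothetical size-class breaks (m²); illustrative only, not the
# classes actually used for the Maringá mapping.
SIZE_CLASSES = [(0, 500), (500, 5000), (5000, 50000), (50000, float("inf"))]

def classify_patches(areas_m2):
    """Group patch areas into size classes, returning per-class
    patch count and total area (the summary-statistics step)."""
    stats = defaultdict(lambda: {"count": 0, "total_area": 0.0})
    for area in areas_m2:
        for lo, hi in SIZE_CLASSES:
            if lo <= area < hi:
                stats[(lo, hi)]["count"] += 1
                stats[(lo, hi)]["total_area"] += area
                break
    return dict(stats)

# Invented patch areas (m²), e.g. computed from polygon geometry.
patches = [120.0, 480.0, 950.0, 7200.0, 61000.0]
summary = classify_patches(patches)
```

In a real workflow the areas would come from the classified polygons themselves; here they are listed by hand to keep the sketch self-contained.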
Abstract:
Graphical user interfaces (GUIs) are critical components of today's open source software. Given their increased relevance, the correctness and usability of GUIs are becoming essential. This paper describes the latest results in the development of our tool to reverse engineer the GUI layer of interactive open source computing systems. We use static analysis techniques to generate models of user interface behavior from source code. These models help in graphical user interface inspection by allowing designers to concentrate on its most important aspects. One particular type of model that the tool is able to generate is the state machine. The paper shows how graph theory can be useful when applied to these models. A number of metrics and algorithms are used to analyze aspects of the user interface's quality. The ultimate goal of the tool is to enable the analysis of interactive systems through inspection of their GUI source code.
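A graph-theoretic check on a GUI state machine of the kind described above can be sketched in a few lines. The states and events below are hypothetical; the example only illustrates one metric, reachability, which flags windows no user event can ever open.

```python
from collections import deque

# Hypothetical GUI behaviour model: each state (window) maps to the
# states reachable through one user event.
fsm = {
    "login": ["main"],
    "main": ["settings", "about", "main"],
    "settings": ["main"],
    "about": ["main"],
    "orphan": ["main"],   # no event leads here: a usability smell
}

def reachable(fsm, start):
    """Breadth-first search over the state machine; returns the set
    of states reachable from the start state."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in fsm.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

unreachable = set(fsm) - reachable(fsm, "login")
```

Other metrics mentioned in the abstract (e.g. out-degree, path lengths) follow the same pattern of walking the adjacency structure.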
Abstract:
Nowadays, despite improvements in usability and intuitiveness, users still have to adapt to the proposed systems to satisfy their needs. For instance, they must learn how to achieve tasks, how to interact with the system, and how to fulfill the system's specifications. This paper proposes an approach to improve this situation by enabling graphical user interface redefinition through virtualization and computer vision, with the aim of increasing the system's usability. To achieve this goal, the approach is based on enriched task models, virtualization and picture-driven computing.
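One way to picture the binding between picture-driven computing and enriched task models is the step where widgets detected on screen are matched to task-model actions. The sketch below assumes the vision stage has already produced labelled widget detections; all names and the task model itself are hypothetical.

```python
# Hypothetical output of a computer-vision stage: widgets located on a
# screenshot, each with a recognized label and bounding box.
detected_widgets = [
    {"label": "Save", "bbox": (10, 10, 60, 30)},
    {"label": "Print", "bbox": (70, 10, 120, 30)},
    {"label": "Mystery", "bbox": (130, 10, 180, 30)},
]

# Hypothetical enriched task model mapping widget labels to tasks.
task_model = {"Save": "persist_document", "Print": "export_document"}

def bind_widgets_to_tasks(widgets, tasks):
    """Pair each detected widget with its task-model action; widgets
    with no matching task are left out of the redefined interface."""
    return [(w["label"], tasks[w["label"]])
            for w in widgets if w["label"] in tasks]

bindings = bind_widgets_to_tasks(detected_widgets, task_model)
```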
Abstract:
Purpose: Precise needle puncture of the renal collecting system is an essential but challenging step for successful percutaneous nephrolithotomy. We evaluated the efficiency of a new real-time electromagnetic tracking system for in vivo kidney puncture. Materials and Methods: Six anesthetized female pigs underwent ureterorenoscopy to place a catheter with an electromagnetic tracking sensor into the desired puncture site and ascertain puncture success. A tracked needle with a similar electromagnetic tracking sensor was subsequently navigated to the sensor in the catheter. Four punctures were performed by each of 2 surgeons in each pig, 1 each in the kidney and middle ureter on the right and left sides. Outcome measurements were the number of attempts and the time needed to evaluate the virtual trajectory and perform percutaneous puncture. Results: A total of 24 punctures were easily performed without complication. Surgeons required more time to evaluate the trajectory during ureteral than kidney puncture (median 15 seconds, range 14 to 18 vs 13, range 11 to 16, p = 0.1). Median renal and ureteral puncture times were 19 seconds (range 14 to 45) and 51 seconds (range 45 to 67), respectively (p = 0.003). Two attempts were needed to achieve a successful ureteral puncture. The technique requires the presence of a renal stone for testing. Conclusions: The proposed electromagnetic tracking solution for renal collecting system puncture proved to be highly accurate, simple and quick. This method might represent a paradigm shift in percutaneous kidney access techniques.
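The core geometric idea, guiding a tracked needle toward a tracked catheter sensor, reduces to monitoring the distance between two 3D positions reported by the electromagnetic tracker. The coordinates and the success threshold below are invented for illustration; they are not values from the study.

```python
import math

# Hypothetical tracker readings (mm, tracker coordinate frame):
target = (12.0, 4.5, -3.0)   # catheter sensor at the puncture site
needle = (12.5, 4.0, -2.0)   # needle-tip sensor

# The "virtual trajectory" is the segment from needle tip to target;
# navigation succeeds when the remaining distance is small enough.
remaining = math.dist(needle, target)
on_target = remaining <= 2.0  # hypothetical 2 mm success threshold
```

In the real system these positions would be streamed in real time and re-evaluated on every tracker update.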
Abstract:
Pectus excavatum is the most common deformity of the thorax, and its pre-operative diagnosis usually comprises a Computed Tomography (CT) examination. Aiming at eliminating the high CT radiation exposure, this work presents a new methodology for replacing CT with a (radiation-free) laser scanner in the treatment of pectus excavatum using personally modeled prostheses. The complete elimination of CT involves determining the external outline of the ribs, at the point of maximum sternum depression for prosthesis placement, based on chest wall skin surface information acquired by a laser scanner. The developed solution resorts to artificial neural networks trained with data vectors from 165 patients. Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Backpropagation and One Step Secant learning algorithms were used. The training procedure was performed using soft tissue thicknesses, determined with image processing techniques that automatically segment the skin and rib cage. The developed solution was then used to determine the rib outlines in scanner data from 20 patients. Tests revealed that rib position can be estimated with an average error of about 6.82±5.7 mm for the left and right sides of the patient. This error range is well below that of current manual prosthesis modeling (11.7±4.01 mm), even without CT imaging, indicating a considerable step towards the replacement of CT by a 3D scanner for prosthesis personalization.
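The evaluation step that yields figures like 6.82±5.7 mm is a mean ± standard deviation of absolute position errors between predicted and reference rib positions. A minimal sketch, with entirely made-up positions standing in for the network's predictions and the CT reference:

```python
import statistics

# Hypothetical rib positions (mm): network predictions from the
# skin-surface model vs. the CT reference. Values are invented.
predicted = [41.2, 38.7, 36.1, 33.9]
reference = [40.0, 39.5, 37.0, 33.0]

# Absolute error per rib landmark, then mean ± standard deviation,
# mirroring how the reported error range is computed.
errors = [abs(p - r) for p, r in zip(predicted, reference)]
mean_err = statistics.mean(errors)
std_err = statistics.stdev(errors)
```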
Abstract:
Background: Several studies link the seamless fit of implant-supported prostheses to the accuracy of the dental impression technique used during acquisition. In addition, factors such as implant angulation and coping shape contribute to implant misfit. Purpose: The aim of this study was to identify the most accurate impression technique and the factors affecting impression accuracy. Material and Methods: A systematic review of the peer-reviewed literature was conducted, analyzing articles published between 2009 and 2013. The following search terms were used: implant impression, impression accuracy, and implant misfit. A total of 417 articles were identified; 32 were selected for review. Results: All 32 selected studies refer to in vitro studies. Fourteen articles compare the open and closed impression techniques: 8 advocate the open technique and 6 report similar results. Another 14 articles evaluate splinted and non-splinted techniques, all advocating the splinted technique. Polyether impression material was used in nine studies, vinyl polysiloxane in six, and irreversible hydrocolloid in one. Eight studies evaluated different coping designs. Intraoral optical devices were compared in four studies. Conclusions: The most accurate results were achieved with two configurations: (1) the optical intraoral system with powder and (2) the open technique with splinted squared transfer copings, using polyether as the impression material.
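The tallying behind results like "8 advocate the open technique, 6 report similar results" is a simple frequency count over the reviewed studies. A toy sketch of that bookkeeping, with the verdict list reconstructed to match the counts reported above:

```python
from collections import Counter

# Verdicts of the 14 open-vs-closed comparison studies, encoded as
# the technique each study favoured (reconstructed from the totals
# in the abstract, not from the individual articles).
open_vs_closed = ["open"] * 8 + ["similar"] * 6

verdicts = Counter(open_vs_closed)
```

`Counter` returns 0 for categories never observed (here, "closed"), which keeps the tally robust as more studies are added.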
Abstract:
The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during estimation of the implant's pose (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region of interest is extracted from the CBCT data using 2 operator-defined points on the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align the patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08º and 1.4º, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
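Step (3) ends by reading the implant pose off the optimal rigid transformation. The sketch below shows that final read-off only: a 4x4 homogeneous transform (here a toy 30° rotation about z plus a translation, not a registration result) is applied to the implant origin and to a point on its axis, yielding position and orientation.

```python
import math

# Toy rigid transform standing in for the registration output:
# 30° rotation about z, translation (2.0, 1.0, 0.5). Values invented.
theta = math.radians(30)
T = [
    [math.cos(theta), -math.sin(theta), 0.0, 2.0],
    [math.sin(theta),  math.cos(theta), 0.0, 1.0],
    [0.0, 0.0, 1.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],
]

def apply(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

position = apply(T, (0.0, 0.0, 0.0))  # implant origin after alignment
axis_tip = apply(T, (0.0, 0.0, 1.0))  # point along the implant axis
```

The implant orientation is the direction from `position` to `axis_tip`; a full pipeline would obtain `T` from the voxel-based registration rather than construct it by hand.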
Abstract:
Dental implant recognition in patients without available records is a time-consuming and far from straightforward task. The traditional method is a completely user-dependent process, in which an expert compares a 2D X-ray image of the dental implant against a generic database. Due to the high number of implants available and the similarity between them, automatic/semi-automatic frameworks to aid implant model detection are essential. In this study, a novel computer-aided framework for dental implant recognition is suggested. The proposed method relies on image processing concepts, namely: (i) a segmentation strategy for semi-automatic implant delineation; and (ii) a machine learning approach for implant model recognition. Although the segmentation technique is the main focus of the current study, preliminary details of the machine learning approach are also reported. Two different scenarios are used to validate the framework: (1) comparison of the semi-automatic contours against manual implant contours in 125 X-ray images; and (2) classification of 11 known implants using a large reference database of 601 implants. In experiment 1, a Dice metric of 0.97±0.01, a mean absolute distance of 2.24±0.85 pixels and a Hausdorff distance of 11.12±6 pixels were obtained. In experiment 2, 91% of the implants were successfully recognized while reducing the reference database to 5% of its original size. Overall, the segmentation technique achieved accurate implant contours. Although the preliminary classification results prove the concept of the current work, more features and an extended database should be used in future work.
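The contour-evaluation metrics named above have compact definitions. A minimal sketch of two of them, Dice overlap between binary masks and symmetric Hausdorff distance between point sets; the tiny masks and contours are illustrative only.

```python
import math

def dice(a, b):
    """Dice coefficient of two same-length binary masks (lists of 0/1)."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

def hausdorff(p, q):
    """Symmetric Hausdorff distance between two 2D point sets."""
    def directed(s, t):
        return max(min(math.dist(u, v) for v in t) for u in s)
    return max(directed(p, q), directed(q, p))

# Toy flattened masks: semi-automatic vs. manual delineation.
mask_auto = [1, 1, 1, 0, 0, 1]
mask_manual = [1, 1, 0, 0, 1, 1]
d = dice(mask_auto, mask_manual)

# Toy contours as 2D pixel coordinates.
hd = hausdorff([(0, 0), (1, 0)], [(0, 1), (1, 0)])
```

The mean absolute distance reported in the abstract follows the same directed-distance idea, averaging instead of taking the maximum.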
Abstract:
Micronuclei (MN) in exfoliated epithelial cells are widely used as biomarkers of cancer risk in humans. MN are classified as biomarkers of chromosome breakage and loss. They are small, extranuclear bodies that arise in dividing cells from acentric chromosome/chromatid fragments, or from whole chromosomes/chromatids that lag behind in anaphase and are not included in the daughter nuclei at telophase. Buccal mucosa cells have been used in the biomonitoring of exposed populations because these cells are in the direct route of exposure to ingested pollutants, are capable of metabolizing proximate carcinogens to reactive chemicals, and are easily and rapidly collected by brushing the buccal mucosa. The objective of the present study was to further investigate whether, and to what extent, different stains have an effect on the results of micronucleus studies in exfoliated cells. The techniques compared were: Papanicolaou (PAP), Modified Papanicolaou, May-Grünwald Giemsa (MGG), Giemsa, Harris's Hematoxylin, Feulgen with Fast Green counterstain, and Feulgen without counterstain.
Abstract:
Storm and tsunami deposits are generated by similar depositional mechanisms, making their discrimination hard to establish using classic sedimentologic methods. Here we propose an original approach to identify tsunami-induced deposits by combining numerical simulation and rock magnetism. To test our method, we investigate the tsunami deposit of the Boca do Rio estuary generated by the 1755 Lisbon earthquake, which is well described in the literature. We first test the 1755 tsunami scenario using a numerical inundation model to provide physical parameters for the tsunami wave. Then we use concentration-sensitive (MS, SIRM) and grain size-sensitive (χARM, ARM, B1/2, ARM/SIRM) magnetic proxies, coupled with SEM microscopy, to unravel the magnetic mineralogy of the tsunami-induced deposit and its associated depositional mechanisms. In order to study the connection between the tsunami deposit and the different sedimentologic units present in the estuary, the magnetic data were processed by multivariate statistical analyses. Our numerical simulation shows a large inundation of the estuary, with flow depths varying from 0.5 to 6 m and a run-up of ~7 m. Magnetic data show a dominance of paramagnetic minerals (quartz) mixed with a lesser amount of ferromagnetic minerals, namely titanomagnetite and titanohematite, both of detrital origin and reworked from the underlying units. Multivariate statistical analyses indicate a better connection between the tsunami-induced deposit and a mixture of Units C and D. All these results point to a scenario in which the energy released by the tsunami wave was strong enough to overtop and erode an important amount of sand from the littoral dune and mix it with materials reworked from underlying layers at least 1 m in depth. The method tested here represents an original and promising tool to identify tsunami-induced deposits in similar embayed beach environments.
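The multivariate step, relating the tsunami deposit to candidate source units in magnetic-proxy space, can be sketched as standardizing each proxy and finding the closest unit. All proxy values below are invented placeholders, not data from the Boca do Rio study, and nearest-neighbour distance stands in for the fuller statistical analysis.

```python
import math
import statistics

# Hypothetical proxy vectors [MS, SIRM, ARM] per sample (units arbitrary).
samples = {
    "tsunami": [5.1, 20.0, 0.9],
    "unit_C":  [4.8, 19.0, 1.0],
    "unit_D":  [5.5, 21.5, 0.8],
    "dune":    [2.0,  8.0, 0.3],
}
names = list(samples)

# Standardize each proxy (column) to zero mean and unit variance so
# no single proxy dominates the distance.
zcols = []
for col in zip(*samples.values()):
    mu, sd = statistics.mean(col), statistics.pstdev(col)
    zcols.append([(x - mu) / sd for x in col])
z = {n: [zc[i] for zc in zcols] for i, n in enumerate(names)}

def nearest(target):
    """Name of the sample closest to `target` in standardized proxy space."""
    others = [n for n in names if n != target]
    return min(others, key=lambda n: math.dist(z[n], z[target]))

closest = nearest("tsunami")
```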