943 results for digital terrain analysis
Abstract:
A set of NIH Image macro programs was developed to make qualitative and quantitative analyses from digital stereo pictures produced by scanning electron microscopes. These tools were designed for image alignment, anaglyph representation, animation, reconstruction of true elevation surfaces, reconstruction of elevation profiles, true-scale elevation mapping and, for the quantitative approach, surface area and roughness calculations. Limitations regarding processing time, scanning techniques and programming concepts are also discussed.
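As a hedged illustration of the elevation-reconstruction step, the sketch below applies the standard stereo-SEM parallax relation z = p / (2 sin(a/2)), where p is the measured parallax and a the total tilt between the two exposures; the function name, pixel size and tilt angle are illustrative and not taken from the macros described above.

```python
import numpy as np

def elevation_from_parallax(parallax_px, pixel_size_um, tilt_deg):
    """Estimate relative elevation from the parallax between two SEM
    images of the same field taken at different stage tilts, using
    z = p / (2 * sin(a / 2))."""
    p = parallax_px * pixel_size_um   # parallax in micrometres
    a = np.radians(tilt_deg)          # total tilt between the exposures
    return p / (2.0 * np.sin(a / 2.0))

# Example: a feature shifted by 12 px between images taken 6 degrees
# apart, at 0.05 um/px, lies about 5.7 um above the reference plane.
print(elevation_from_parallax(12, 0.05, 6.0))
```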
Abstract:
OBJECTIVE: to compare tooth-size measurements, their reproducibility, and the application of the Tanaka and Johnston regression equation for predicting canine and premolar size on plaster and digital models. METHODS: thirty plaster models were scanned to obtain the digital models. Mesiodistal tooth-width measurements were made with a digital caliper on the plaster models and, on the digital models, with the O3d software (Widialabs). The sum of the widths of the mandibular incisors was used to predict the size of the premolars and canines with the regression equation, and these values were compared with the actual tooth sizes. The data were statistically analyzed using Pearson's correlation test, Dahlberg's formula, the paired t-test, and analysis of variance (p < 0.05). RESULTS: excellent intra-examiner agreement was observed for the measurements on both models. Random error was absent from the caliper measurements, and systematic error was more frequent in the digital model. The space prediction obtained with the regression equation was larger than the sum of the premolars and canines present on both the plaster and the digital models. CONCLUSION: despite the good reproducibility of the measurements on both models, most measurements on the digital models were larger than those on the plaster models. The predicted space was overestimated on both models and was significantly larger on the digital models.
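For readers unfamiliar with the prediction step, the sketch below encodes the classic Tanaka and Johnston regression (predicted combined width of the canine and two premolars in one quadrant = 0.5 × the sum of the four mandibular incisors + 10.5 mm for the lower arch, + 11.0 mm for the upper); the study's own measurements are not reproduced here.

```python
def tanaka_johnston(sum_mandibular_incisors_mm):
    """Predict the combined mesiodistal width (mm) of the canine and
    premolars in one quadrant from the sum of the four mandibular
    incisors, per the Tanaka and Johnston regression equations."""
    x = sum_mandibular_incisors_mm
    return {"mandibular": 0.5 * x + 10.5,
            "maxillary": 0.5 * x + 11.0}

# Example: an incisor sum of 23.0 mm predicts 22.0 mm (lower quadrant)
# and 22.5 mm (upper quadrant).
print(tanaka_johnston(23.0))
```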
Abstract:
Objectives: The aim of this study was to compare two methods for the evaluation of periapical lesion changes following endodontic therapy (digital subtraction technique and morphometric analysis) by outlining the radiolucent area. Methods: 13 human anterior teeth with pulp necrosis and chronic periapical lesions were used. Periapical radiographs were taken immediately after endodontic therapy (time 0) and then 2 months, 4 months and 6 months post treatment, using an intraoral radiographic film holder stabilized with impression material. The films were processed in a standard manner and the digitized images were submitted to digital subtraction using Adobe Photoshop 6.0. New bone formation or bone resorption areas were then measured. In the morphometric analysis, the periapical lesions were outlined using VixWin 2000 and the area (in square millimetres) was recorded. The obtained data were submitted to agreement analysis for comparison of the two techniques. Results: There was no correlation between the areas of radiographic changes detected by digital subtraction and periapical lesion outline (r=0.02-0.45). The new bone formation areas observed by digital subtraction presented higher values, with bone changes being especially evident in the 2 month follow-up radiographs, which suggests a higher sensitivity for this method. Conclusions: Both methods are suitable for the evaluation of periapical lesion changes, but the digital subtraction technique is more sensitive for detecting radiographic periapical changes. Dentomaxillofacial Radiology (2009) 38, 438-444. doi: 10.1259/dmfr/53304677
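The study performed the subtraction in Adobe Photoshop 6.0; the NumPy sketch below reproduces the underlying arithmetic under common conventions (neutral gray at 128, difference halved to fit the 8-bit range), with an illustrative change threshold rather than the paper's measurement protocol.

```python
import numpy as np

def subtract_radiographs(baseline, follow_up):
    """Digital subtraction of two geometrically aligned, contrast-
    normalized 8-bit radiographs: unchanged anatomy cancels to neutral
    gray (128); new bone appears brighter, resorption darker."""
    diff = follow_up.astype(np.int16) - baseline.astype(np.int16)
    return np.clip(diff // 2 + 128, 0, 255).astype(np.uint8)

def changed_area_mm2(subtracted, pixel_mm, threshold=20):
    """Area (mm^2) whose gray level departs from neutral gray by more
    than `threshold` -- a simple stand-in for the measured regions of
    bone formation or resorption."""
    changed = np.abs(subtracted.astype(np.int16) - 128) > threshold
    return changed.sum() * pixel_mm ** 2
```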
Abstract:
New formulations, techniques and devices have made dental whitening safer and more effective. Despite this, whitening levels are still verified by visual comparison, an empirical, subjective method that is prone to error and dependent on individual interpretation. Normally, the result of whitening is expressed as the amplitude of the shift between the initial and final color, taking as reference the shades of a color scale ordered from darkest to lightest. Although it is the most widely used scale, the ordering of the Vita Classical (R) - Vita scale, according to the manufacturer's recommendations, proves inadequate for evaluating whitening. Using digital images and the OER algorithm (ordering of the reference scale), developed especially for ScanWhite (C), the shades of the Vita Classical (R) scale were ordered. For this, the mean values of the R, G and B color channels over the middle portion of the crowns were adopted as the reference for evaluation. The images were taken with a Sony Cybershot DSC F828 camera. The results of the computational ordering were compared with the sequence proposed by the manufacturer and with that obtained by visual evaluation, carried out by 10 volunteers under standardized illumination conditions. Statistical analysis showed significant differences between the orderings.
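The abstract specifies the OER criterion only as the mean R, G and B values over the middle portion of each crown; the sketch below fills in the remaining details under that assumption (the aggregation into a single brightness score and the lightest-first order are guesses, not the published algorithm).

```python
import numpy as np

def order_shade_tabs(shade_crops):
    """Order Vita shade tabs from lightest to darkest in the spirit of
    the OER algorithm. `shade_crops` maps a shade name (e.g. 'A1') to
    an RGB array already cropped to the middle portion of the crown."""
    def brightness(img):
        # Mean of the per-channel (R, G, B) means over the crop.
        return img.reshape(-1, 3).mean(axis=0).mean()
    return sorted(shade_crops,
                  key=lambda name: brightness(shade_crops[name]),
                  reverse=True)  # highest mean value = lightest first
```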
Abstract:
This paper reports the novel application of digital curvature as a feature for morphological characterization and classification of landmark shapes. By inheriting several unique features of the continuous curvature, the digital curvature provides invariance to translations, rotations, local shape deformations, and is easily made tolerant to scaling. In addition, the bending energy, a global shape feature, can be directly estimated from the curvature values. The application of these features to analyse patterns of cranial morphological geographic differentiation in the rodent species Thrichomys apereoides has led to encouraging results, indicating a close correspondence between the geographical and morphological distributions. (C) 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
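A minimal sketch of how curvature and bending energy can be estimated from a sampled contour, using the standard planar formula k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2); the paper's own digital-curvature estimator and its scale-tolerance machinery may differ.

```python
import numpy as np

def digital_curvature(x, y):
    """Curvature along a densely sampled contour (x[k], y[k]), with
    derivatives taken by finite differences (endpoints are handled
    with one-sided differences, so a closed contour should overlap)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def bending_energy(x, y):
    """Mean squared curvature -- the global shape feature the abstract
    derives directly from the curvature values."""
    return np.mean(digital_curvature(x, y) ** 2)
```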
Abstract:
The objective of the present study, developed in a mountainous region in Brazil where many landslides occur, is to present a method for detecting landslide scars that couples image processing techniques with spatial analysis tools. An IKONOS image was initially segmented and then classified with a Bhattacharyya classifier, with an acceptance limit of 99%, resulting in 216 polygons whose spectral response was similar to landslide scars. Using spatial analysis tools that took into account a susceptibility map, a map of local drainage channels and highways, and the maximum expected size of scars in the study area, features misinterpreted as scars were excluded. The 43 resulting features were then compared with visually interpreted landslide scars and field observations. The proposed method can be reproduced and enhanced by adding filtering criteria, and it was able to find new scars on the image, with a final error rate of 2.3%.
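The exclusion step can be pictured as a chain of attribute filters over the candidate polygons, as in the purely illustrative sketch below; the attribute names and thresholds are placeholders, not the paper's actual criteria.

```python
def filter_scar_candidates(polygons, max_area_m2,
                           min_susceptibility, max_dist_to_drainage_m):
    """Keep only candidate polygons that remain plausible landslide
    scars after the spatial-analysis filters. Each polygon is a dict
    with precomputed attributes (hypothetical field names)."""
    kept = []
    for p in polygons:
        if p["area_m2"] > max_area_m2:                 # bigger than any expected scar
            continue
        if p["susceptibility"] < min_susceptibility:   # terrain mapped as stable
            continue
        if p["dist_to_drainage_m"] > max_dist_to_drainage_m:
            continue
        kept.append(p)
    return kept
```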
Abstract:
Introduction: The aim of this study was to assess the occurrence of apical root transportation after the use of Pro Taper Universal rotary files sizes 3 (F3) and 4 (F4). Methods: Instruments were worked to the apex of the original canal, always by the same operator. Digital subtraction radiography images were produced in buccolingual and mesiodistal projections. A total of 25 radiographs were taken from root canals of human maxillary first molars with curvatures varying from 23-31 degrees. Quantitative data were analyzed by intraclass correlation coefficient and Wilcoxon nonparametric test (P = .05). Results: Buccolingual images revealed a significantly higher degree of apical transportation associated with F4 instruments when compared with F3 instruments in relation to the original canal (Wilcoxon test, P = .007). No significant difference was observed in mesiodistal images (P = .492). Conclusions: F3 instruments should be used with care in curved canals, and F4 instruments should be avoided in apical third preparation of curved canals. (J Endod 2010;36:1052-1055)
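A hedged example of the paired Wilcoxon comparison reported above, using SciPy; the transportation values are made up purely to show the call, since the abstract reports only the resulting P values.

```python
from scipy.stats import wilcoxon

# Paired apical-transportation measurements (mm) for the same canals
# after F3 and after F4 preparation (illustrative values only).
f3 = [0.08, 0.10, 0.07, 0.12, 0.09, 0.11, 0.08, 0.10]
f4 = [0.15, 0.18, 0.12, 0.20, 0.16, 0.19, 0.14, 0.17]

stat, p = wilcoxon(f3, f4)
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")
```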
Abstract:
Background: In the international scientific literature, there are few studies that emphasize the presence or absence of hair in forensic facial reconstructions. There are neither Brazilian studies concerning digital facial reconstructions without hair, nor research comparing recognition tests between digital facial reconstructions with hair and without hair. The miscegenation of Brazilian people is considerable. Brazilian people, and, in particular, Brazilian women, even if considered as Caucasoid, may present the hair in very different ways: curly, wavy or straight, blonde, red, brown or black, long or short, etc. For this reason, it is difficult to find a correct type of hair for facial reconstruction (unless, in real cases, some hair is recovered with the skeletal remains). Aims and methods: This study focuses on the performance of three different digital forensic facial reconstructions, without hair, of a Brazilian female subject (based on one international database and two Brazilian databases for soft facial-tissue thickness) and evaluates the digital forensic facial reconstructions comparing them to photographs of the target individual and nine other subjects, employing the recognition method. A total of 22 assessors participated in the recognition process; all of them were familiar with the 10 individuals who composed the face pool. Results and conclusions: The target subject was correctly recognized by 41% of the 22 examiners in the International Pattern, by 32% in the Brazilian Magnetic Resonance Pattern and by 32% in the Brazilian Fresh Cadavers Pattern. The facial reconstructions without hair were correctly recognized using the three databases of facial soft-tissue thickness. The observed results were higher than the results obtained using facial reconstructions with hair, from the same skull, which can indicate that it is better to not use hair, at least when there is no information concerning its characteristics. © 2013 Elsevier B.V. All rights reserved.
Abstract:
The article discusses a displacement-measurement proposal that uses a single digital camera, with the aim of exploiting its feasibility for modal analysis applications. The proposal is a non-contact measuring approach able to measure multiple points simultaneously with one camera. A modal analysis of a reduced-scale laboratory building structure, based only on the responses of the structure measured with the camera, is presented, focusing on the feasibility and advantages of using a simple, ordinary camera for output-only modal analysis of structures. The modal parameters of the structure are estimated from the camera data and also by conventional experimental modal analysis based on the Frequency Response Function (FRF) obtained with the usual sensors, such as an accelerometer and a force cell. The comparison of the two analyses showed that the technique is a promising non-contact measuring tool, relatively simple and effective for use in structural modal analysis.
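For the conventional, sensor-based branch of the comparison, the FRF is commonly estimated with the H1 estimator H1(f) = S_xy(f) / S_xx(f); the SciPy sketch below shows that estimator (names and parameters are illustrative, and the camera branch, being output-only, would instead identify modes directly from the measured responses).

```python
import numpy as np
from scipy.signal import csd, welch

def frf_h1(force, response, fs, nperseg=1024):
    """H1 estimate of the Frequency Response Function between an input
    force signal and a measured response: H1(f) = S_xy(f) / S_xx(f)."""
    f, s_xy = csd(force, response, fs=fs, nperseg=nperseg)
    _, s_xx = welch(force, fs=fs, nperseg=nperseg)
    return f, s_xy / s_xx
```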
Abstract:
In this work, design criteria for cooling of electronic systems used in digital transmission equipment are considered. An experimental study is carried out using simulated electronic equipment in which vertically oriented circuit boards are aligned to form vertical channels. Resistors are used to simulate actual components. The temperatures of several components on the printed circuit boards are measured, and the influence of baffles and shields on the cooling effect is discussed. It was observed that the use of baffles reduces the temperature levels and that the use of shields, although protecting the components from magnetic effects, causes an increase in the temperature levels.
Abstract:
This PhD thesis discusses the impact of Cloud Computing infrastructures on Digital Forensics in their twofold role as a target of investigations and as a helping hand to investigators. The Cloud offers cheap and almost limitless computing power and storage space, which can be leveraged to commit either new or old crimes and to host the related traces. Conversely, the Cloud can help forensic examiners find clues better and earlier than traditional analysis applications, thanks to its dramatically improved evidence-processing capabilities. In both cases, a new arsenal of software tools needs to be made available. The development of this novel weaponry and its technical and legal implications, from the point of view of the repeatability of technical assessments, is discussed throughout the following pages and constitutes the unprecedented contribution of this work.
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to check the design and the function of a product on a virtual prototype. One use case is the verification of safety clearances between individual components, the so-called clearance analysis. For selected components, engineers determine whether they maintain a prescribed safety clearance to the surrounding components, both at rest and during motion. If components fall below the safety clearance, their shape or position must be changed. For this, it is important to know exactly which regions of the components violate the safety clearance.

In this thesis we present a solution for computing, in real time, all regions between two geometric objects that fall below a safety clearance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety clearance and call them the tolerance-violating primitives. We present a complete solution that divides into the following three major topics.

In the first part we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that specialized tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to account for the required safety clearance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested, and we develop strategies for recognizing primitives as tolerance-violating without running an expensive primitive-primitive tolerance test. Our benchmarks show that our solutions can compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure, which we call Shrubs, for managing the cell contents of the uniform grids used before. Previous approaches to reducing the memory footprint of uniform grids rely mainly on hashing methods, which, however, do not reduce the memory consumed by the cell contents. In our application, neighboring cells often have similar contents. Our approach exploits this redundancy to losslessly compress the cell contents of a uniform grid to one fifth of their former size and to decompress them at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we present applications to various path-planning problems.
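To make the grid-culling idea concrete, here is a minimal Python sketch; the thesis combines a flat hierarchy with uniform grids and a dual-space triangle test, so this illustrates only how inflating a query by the clearance bounds the set of candidate primitive pairs.

```python
from collections import defaultdict
from itertools import product

class UniformGrid:
    """Uniform grid over primitive bounding boxes. A primitive pair can
    only violate a clearance d if their cells lie within d of each
    other, so each query box is inflated by d before collecting
    candidates."""
    def __init__(self, cell_size):
        self.cell = cell_size
        self.cells = defaultdict(list)

    def _span(self, lo, hi):
        return range(int(lo // self.cell), int(hi // self.cell) + 1)

    def insert(self, prim_id, box):
        # box = (minx, miny, minz, maxx, maxy, maxz)
        for idx in product(self._span(box[0], box[3]),
                           self._span(box[1], box[4]),
                           self._span(box[2], box[5])):
            self.cells[idx].append(prim_id)

    def candidates(self, box, clearance):
        """All primitives whose cells intersect the query box inflated
        by `clearance`; only these need a primitive-primitive test."""
        seen = set()
        for idx in product(self._span(box[0] - clearance, box[3] + clearance),
                           self._span(box[1] - clearance, box[4] + clearance),
                           self._span(box[2] - clearance, box[5] + clearance)):
            seen.update(self.cells.get(idx, ()))
        return seen
```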