913 results for: thermography, 3D, reverse engineering, transtibial prostheses, texture mapping
Abstract:
Poster presented at EDULEARN12, International Conference on Education and New Learning Technologies, Barcelona, 2-4 July 2012.
Abstract:
Reverse engineering is the process of discovering the technological principles of a device, object, or system through analysis of its structure, function, and operation. Starting from a device used in clinical practice, the corneal topographer, reverse engineering is used to infer physical principles and laws. In our case, reverse engineering involves taking this mechanical device apart and analyzing its workings in detail. The initial knowledge of the application and usefulness of the device provides a motivation that, together with the combination of theory and practice, helps students understand and learn concepts studied in different subjects of the Optics and Optometry degree. These subjects belong to both the core and compulsory subjects of the syllabus of the first and second years of the degree. Furthermore, the experimental practice serves as a transverse axis relating theoretical concepts, technology transfer, and research.
Abstract:
Paper presented at the XVI Jornadas de Ingeniería del Software y Bases de Datos, JISBD 2011, A Coruña, 5-7 September 2011.
Abstract:
In this study we have identified key genes that are critical in the development of astrocytic tumors. Meta-analysis of microarray studies comparing normal tissue to astrocytoma revealed a set of 646 genes differentially expressed in the majority of astrocytomas. Reverse engineering of these 646 genes using Bayesian network analysis produced a gene network for each grade of astrocytoma (Grade I–IV), and 'key genes' within each grade were identified. The genes found to be most influential in the development of the highest grade of astrocytoma, glioblastoma multiforme, were COL4A1, EGFR, BTF3, MPP2, RAB31, CDK4, CD99, ANXA2, TOP2A, and SERBP1. All of these genes were up-regulated, except MPP2 (down-regulated). These 10 genes were able to predict tumor status with 96–100% confidence using logistic regression, cross-validation, and support vector machine analysis. The Markov blanket genes interact with NF-κB, ERK, MAPK, VEGF, growth hormone and collagen to produce a network whose top biological functions are cancer, neurological disease, and cellular movement. Three of the 10 genes (EGFR, COL4A1, and CDK4) in particular appeared to be potential 'hubs of activity'. Modified expression of these 10 Markov blanket genes increases the lifetime risk of developing glioblastoma compared to the normal population. The glioblastoma risk estimates increased dramatically with joint effects of 4 or more Markov blanket genes: joint interaction effects of 4, 5, 6, 7, 8, 9 or 10 Markov blanket genes produced increases of 9, 13, 20.9, 26.7, 52.8, 53.2, 78.1 or 85.9%, respectively, in the lifetime risk of developing glioblastoma compared to the normal population. In summary, it appears that modified expression of several 'key genes' may be required for the development of glioblastoma. Further studies are needed to validate these 'key genes' as useful tools for early detection and novel therapeutic options for these tumors.
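The classification step described above, predicting tumor status from the expression of a 10-gene panel with logistic regression, can be sketched as follows. The expression values, class separation, and sample counts below are synthetic stand-ins for illustration only, not the study's microarray data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes = 10  # stands in for the 10 Markov blanket genes

# Synthetic expression matrix: tumor samples shifted upward on every gene
# (the study reports 9 of 10 genes up-regulated; this toy data is cruder).
normal = rng.normal(0.0, 1.0, size=(50, n_genes))
tumor = rng.normal(2.0, 1.0, size=(50, n_genes))
X = np.vstack([normal, tumor])
y = np.concatenate([np.zeros(50), np.ones(50)])

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression, bias folded into X."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))     # predicted P(tumor)
        w -= lr * Xb.T @ (p - y) / len(y)     # gradient of log-loss
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)

w = fit_logistic(X, y)
accuracy = (predict(w, X) == y).mean()
```

On well-separated synthetic data like this, the fitted model classifies essentially all samples correctly; the study's 96–100% figures additionally rely on cross-validation, which is omitted here for brevity.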
Abstract:
This work offers a general overview of transtibial prosthetic devices, from their history, to the process of fitting amputee patients with prostheses, to their modular constitutive structure. Particular attention is also given to the biomechanical analysis of movement, with a description of the techniques, the instrumentation, and the results obtained in studies of amputee subjects compared with able-bodied people. Considerations on the biomechanics of transtibial prosthesis users offer interesting insights into the mechanical, hydraulic, and electronic components of modular devices, whose structure and operation are described in full, with reference to the best-known technical patents. Finally, results of recent studies and research are reported, showing that the use of hydraulic ankles with electronically controlled damping in transtibial prosthetic modules is the optimal solution for achieving biomechanically better results, in terms of both movement efficiency and balance.
Abstract:
The safeguarding and conservation of the artistic and architectural heritage are an essential aspect of every culture, and are founded on awareness and knowledge of the heritage assets themselves. Surveying is the basic operation for acquiring rigorous knowledge of an object's geometry and other characteristics. The purposes of surveying operations are manifold, from archiving for documentation to conservation studies aimed at diagnostics and the design of interventions. Digital models, introduced by the technological developments of recent decades, allow thorough knowledge of an asset, require no direct contact during the survey phase, and can be processed according to the needs of each case. The techniques adopted in Reverse Engineering differ by the type of sensor used: photogrammetric techniques use 'passive' sensors and are now widely employed in the Cultural Heritage sector thanks to Structure from Motion tools, while instruments based on 'active' sensors use lasers or structured-light projection and can capture even very complex geometries with great precision. The construction of the models of the Neptune fountain and the Garisenda tower in Bologna provides a valid example of the application of digital survey techniques, and demonstrates their validity on objects of different size in two different fields of application: restoration and monitoring. Future developments of Reverse Engineering in this field are numerous, and Geomatics is undoubtedly a fundamental discipline for realizing them.
Abstract:
The validation of Computed Tomography (CT) based 3D models plays an integral part in studies involving 3D models of bones. This is of particular importance when such models are used for Finite Element studies. The validation of 3D models typically involves the generation of a reference model representing the bone's outer surface. Several different devices have been utilised for digitising a bone's outer surface, such as mechanical 3D digitising arms, mechanical 3D contact scanners, electro-magnetic tracking devices and 3D laser scanners. However, none of these devices is capable of digitising a bone's internal surfaces, such as the medullary canal of a long bone. Therefore, this study investigated the use of a 3D contact scanner, in conjunction with a microCT scanner, for generating a reference standard for validating the internal and external surfaces of a CT based 3D model of an ovine femur. One fresh ovine limb was scanned using a clinical CT scanner (Philips Brilliance 64) with a pixel size of 0.4 mm² and slice spacing of 0.5 mm. The limb was then dissected to obtain the soft-tissue-free bone, while care was taken to protect the bone's surface. A desktop mechanical 3D contact scanner (Roland DG Corporation, MDX-20, Japan) was used to digitise the surface of the denuded bone at a resolution of 0.3 × 0.3 × 0.025 mm. The digitised surfaces were reconstructed into a 3D model using reverse engineering techniques in Rapidform (INUS Technology, Korea). After digitisation, the distal and proximal parts of the bone were removed so that the shaft could be scanned with a microCT scanner (µCT40, Scanco Medical, Switzerland). The shaft, with the bone marrow removed, was immersed in water and scanned with a voxel size of 0.03 mm³. The bone contours were extracted from the image data using the Canny edge filter in MATLAB (The MathWorks). The extracted bone contours were reconstructed into 3D models using Amira 5.1 (Visage Imaging, Germany).
The 3D models of the bone's outer surface reconstructed from CT and microCT data were compared against the 3D model generated using the contact scanner. The 3D model of the inner canal reconstructed from the microCT data was compared against the 3D model reconstructed from the clinical CT scanner data. The disparity between the surface geometries of the two models was calculated in Rapidform and recorded as an average distance with standard deviation. The comparison of the 3D model of the whole bone generated from the clinical CT data with the reference model gave a mean error of 0.19±0.16 mm; the shaft was more accurate (0.08±0.06 mm) than the proximal (0.26±0.18 mm) and distal (0.22±0.16 mm) parts. The comparison between the outer 3D model generated from the microCT data and the contact scanner model gave a mean error of 0.10±0.03 mm, indicating that microCT-generated models are sufficiently accurate for validating 3D models generated by other methods. The comparison of the inner models generated from microCT data with those from clinical CT data gave an error of 0.09±0.07 mm. Utilising a mechanical contact scanner in conjunction with a microCT scanner thus enabled validation of both the outer surface of a CT based 3D model of an ovine femur and the surface of the model's medullary canal.
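The disparity metric reported above, a mean surface distance with standard deviation between a test model and a reference model, can be illustrated with a toy nearest-neighbour computation. The cylindrical point clouds and the 0.1 mm radial offset below are invented for illustration; Rapidform's internal surface-comparison algorithm may differ:

```python
import numpy as np

def surface_disparity(test_pts, ref_pts):
    """Mean and SD of nearest-neighbour distances (brute force)."""
    # Pairwise distance matrix of shape (n_test, n_ref).
    d = np.linalg.norm(test_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.std()

# Reference: points on a unit-radius cylinder (a crude long-bone shaft).
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
z = np.linspace(0, 5, 20)
T, Z = np.meshgrid(theta, z)
ref = np.stack([np.cos(T).ravel(), np.sin(T).ravel(), Z.ravel()], axis=1)

# Test model: the same cylinder with a uniform 0.1 mm radial "segmentation
# error", so every vertex is exactly 0.1 mm from its reference counterpart.
test = ref.copy()
test[:, :2] *= 1.1

mean_err, sd_err = surface_disparity(test, ref)  # ≈ 0.1 mm mean, ~0 SD
```

Real comparisons use point-to-triangle distances on meshed surfaces rather than point-to-point distances, but the summary statistics are computed the same way.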
Abstract:
The increasing use of 3D modeling of human faces in face recognition systems, user interfaces, graphics, gaming and the like has made it an area of active study. The majority of 3D sensors rely on color-coded light projection for 3D estimation. Such systems fail to generate any response in regions covered by facial hair (such as a beard or mustache), and hence produce holes in the model which have to be filled manually later on. We propose the use of wavelet-transform-based analysis to extract the 3D model of human faces from an image of a projected sinusoidal white-light fringe pattern. Our method requires only a single image as input. The method is robust to texture variations on the face due to the space-frequency localization property of the wavelet transform. It can generate models with pixel-level refinement, as the phase is estimated for each pixel by a continuous wavelet transform. In cases of sparse facial hair, the shape distortions due to hairs can be filtered out, yielding an estimate of the underlying face. We use a low-pass filtering approach to estimate the face texture from the same image. We demonstrate the method on several human faces, both with and without facial hair. Unseen views of the face are generated by texture mapping on different rotations of the obtained 3D structure. To the best of our knowledge, this is the first attempt to estimate 3D for human faces in the presence of facial hair structures such as beards and mustaches without generating holes in those areas.
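The core idea, estimating a per-pixel phase from a single fringe image with a continuous wavelet transform, can be sketched in 1D. A complex Morlet wavelet tuned near the carrier frequency acts as a local band-pass filter, and the argument of its response gives the wrapped phase at each pixel; subtracting the known carrier leaves the shape-induced phase. The fringe signal, carrier period, and "face shape" term below are synthetic assumptions, not the paper's data or exact wavelet:

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet at the given scale."""
    u = t / scale
    return np.exp(1j * w0 * u) * np.exp(-u**2 / 2) / np.sqrt(scale)

x = np.arange(512)
carrier = 2 * np.pi * x / 16                       # fringe period: 16 pixels
height_phase = 0.8 * np.sin(2 * np.pi * x / 512)   # slow shape-induced phase
fringe = 0.5 + 0.5 * np.cos(carrier + height_phase)

# Correlate with the wavelet at the scale matching the carrier frequency.
w0 = 6.0
scale = w0 / (2 * np.pi / 16)
t = np.arange(-64, 65)
kernel = morlet(t, scale, w0)
response = np.convolve(fringe, np.conj(kernel[::-1]), mode="same")

# The wrapped response phase is carrier + shape phase; remove the carrier
# in the complex domain to handle wrapping.
wrapped = np.angle(response)
recovered = np.angle(np.exp(1j * (wrapped - carrier)))
```

Away from the boundaries (where the kernel runs off the signal), `recovered` closely tracks `height_phase`; a 2D implementation applies this row by row and then unwraps the phase into a depth map.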
Abstract:
An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture-mapped cylinder. Tracking is formulated as an image registration problem in the cylinder's texture map image. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. To solve the registration problem in the presence of lighting variation and head motion, the residual error of registration is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable on-line tracking is achieved via regularized, weighted least-squares minimization of the registration error. The regularization term tends to limit the potential ambiguities that arise in the warping and illumination templates, and enables stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The formulation is tailored to take advantage of the texture mapping hardware available in many workstations, PCs, and game consoles. The non-optimized implementation runs at about 15 frames per second on an SGI O2 graphics workstation. Extensive experiments evaluating the effectiveness of the formulation are reported. The sensitivity of the technique to illumination, regularization parameters, errors in the initial positioning, and internal camera parameters is analyzed. Examples and applications of tracking are reported.
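The update at the heart of such a tracker, modeling the residual image as a linear combination of warping and illumination templates and solving for the coefficients by regularized, weighted least squares, can be sketched as follows. The templates, weights, and dimensions are synthetic stand-ins, not the paper's actual basis:

```python
import numpy as np

def solve_rwls(A, b, weights, lam):
    """argmin_x ||W^(1/2)(Ax - b)||^2 + lam*||x||^2 via normal equations."""
    W = np.diag(weights)
    H = A.T @ W @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(H, A.T @ W @ b)

rng = np.random.default_rng(1)
n_pix, n_templates = 400, 6
A = rng.normal(size=(n_pix, n_templates))       # warp + illumination basis
x_true = np.array([0.5, -0.2, 0.1, 0.0, 0.3, -0.1])
b = A @ x_true + 0.01 * rng.normal(size=n_pix)  # observed residual image
w = np.ones(n_pix)                              # per-pixel confidence weights

x_hat = solve_rwls(A, b, w, lam=1e-3)           # recovers x_true closely
```

In the tracker, the warp coefficients in `x_hat` update the head pose each frame while the illumination coefficients absorb lighting change; the regularization term `lam` damps the ambiguous directions the abstract mentions.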
Abstract:
The research aims at developing a framework for semantic-based digital survey of architectural heritage. Rooted in knowledge-based modeling, which extracts mathematical constraints on geometry from architectural treatises, as-built information about the architecture obtained from image-based modeling is integrated with the ideal model in a BIM platform. The knowledge-based modeling transforms the geometry and parametric relations of architectural components from 2D prints into 3D digital models, and creates a large number of variations based on shape grammar in real time thanks to parametric modeling. It also provides prior knowledge for semantically segmenting unorganized survey data. The emergence of SfM (Structure from Motion) provides a way to reconstruct large, complex architectural scenes with high flexibility, low cost and full automation, but low reliability of metric accuracy. We address this problem by combining photogrammetric approaches consisting of camera configuration, image enhancement, bundle adjustment, etc. Experiments show that the accuracy of image-based modeling following our workflow is comparable to that of range-based modeling. We also demonstrate positive results of our optimized approach in the digital reconstruction of a portico, where a low-texture vault and dramatic transitions in illumination cause great difficulty in the workflow without optimization. Once the as-built model is obtained, it is integrated with the ideal model in a BIM platform which allows multiple forms of data enrichment. In spite of its promising prospects in the AEC industry, BIM has been developed with limited consideration of reverse engineering from survey data. Besides representing the architectural heritage in parallel ways (ideal model and as-built model) and comparing their differences, we address how to create the as-built model in BIM software, which is still an open problem.
The research is expected to be fundamental for the study of architectural history, the documentation and conservation of architectural heritage, and the renovation of existing buildings.
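The quantity that the bundle adjustment step mentioned above minimizes is the reprojection error: the image-plane distance between observed keypoints and 3D points projected through the current camera estimate. A minimal sketch under a pinhole camera model, with toy camera parameters and points rather than real survey data:

```python
import numpy as np

def project(points, R, t, f):
    """Pinhole projection: rotate/translate into camera frame, divide by depth."""
    cam = points @ R.T + t
    return f * cam[:, :2] / cam[:, 2:3]

def reprojection_rmse(points, R, t, f, observed):
    """RMS image-plane distance between projections and observed keypoints."""
    err = project(points, R, t, f) - observed
    return np.sqrt((err**2).sum(axis=1).mean())

pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 6.0], [-1.0, 1.0, 4.0]])
R = np.eye(3)                 # camera rotation
t = np.array([0.0, 0.0, 0.0]) # camera translation
f = 800.0                     # focal length in pixels

obs = project(pts, R, t, f)   # perfect observations of the true camera
rmse_true = reprojection_rmse(pts, R, t, f, obs)                  # ~0
rmse_off = reprojection_rmse(pts, R, t + [0.1, 0.0, 0.0], f, obs) # large
```

Bundle adjustment jointly refines all camera poses and 3D points to minimize this residual over every observation, which is what restores metric reliability to an SfM reconstruction.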
Abstract:
The enormous impact of crystal engineering in modern solid-state chemistry benefits from the connection between a typical basic-science field and the word 'engineering'. Regrettably, the engineering aspects of organic or metal-organic crystalline materials have so far been limited to descriptive structural features, sometimes entangled with topological aspects, but only rarely linked to true material design. Design should include not only the fabrication and structural description of solids at the micro- and nano-scopic level, but also proper reverse engineering, a fundamental discipline for engineers. Translated into scientific language, reverse crystal engineering refers to a dedicated and accurate analysis of how the building blocks contribute to generating a given material property. This would enable a more appropriate design of new crystalline materials. We propose here the application of reverse crystal engineering to the optical properties of organic and metal-organic framework structures, applying the distributed atomic polarizability approach that we have extensively investigated in the past few years [1,2].
Abstract:
In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. This is a simple and accurate method to reconstruct objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal, we tested our method on several models and performed a study of the neural network's parameterization, computing the quality of representation and comparing results with other neural methods such as the growing neural gas and Kohonen maps, as well as classical methods such as Voxel Grid. We also reconstructed models acquired by low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we propose its acceleration: we have redesigned and implemented the NG learning algorithm to fit Graphics Processing Units using CUDA, obtaining a speed-up of 180× over the sequential CPU version.
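The neural gas learning rule at the core of this approach can be sketched compactly: for each input sample, all codebook units are ranked by distance to the sample and pulled toward it with a step that decays exponentially with rank, so the codebook adapts to the shape of the point cloud. The 2D ring "scan", unit count, and annealing schedules below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def neural_gas(points, n_units=20, epochs=10,
               eps_i=0.5, eps_f=0.05, lam_i=10.0, lam_f=0.5, seed=0):
    """Fit a neural gas codebook to a point cloud (Martinetz-style rule)."""
    rng = np.random.default_rng(seed)
    units = points[rng.choice(len(points), n_units, replace=False)].copy()
    t, t_max = 0, epochs * len(points)
    for _ in range(epochs):
        for x in points[rng.permutation(len(points))]:
            frac = t / t_max
            eps = eps_i * (eps_f / eps_i) ** frac  # step-size schedule
            lam = lam_i * (lam_f / lam_i) ** frac  # neighbourhood range
            # Rank of each unit by distance to the sample (0 = closest).
            ranks = np.argsort(np.argsort(np.linalg.norm(units - x, axis=1)))
            units += (eps * np.exp(-ranks / lam))[:, None] * (x - units)
            t += 1
    return units

# Point cloud sampled from a noisy ring (a crude scanned cross-section).
rng = np.random.default_rng(2)
ang = rng.uniform(0, 2 * np.pi, 500)
cloud = np.stack([np.cos(ang), np.sin(ang)], axis=1)
cloud += 0.02 * rng.normal(size=cloud.shape)

codebook = neural_gas(cloud)  # 20 units distributed around the ring
```

The inner loop is embarrassingly parallel over units (distances, ranks, updates), which is what makes the CUDA port and the reported 180× speed-up feasible; the GPU version parallelizes exactly these per-unit computations.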
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08