12 results for 3D sensor
in CiencIPCA - Instituto Politécnico do Cávado e do Ave, Portugal
Abstract:
Human-machine interaction has evolved significantly in recent years, to the point of enabling suitable solutions to support people with certain physical or cognitive limitations. The development of natural and intuitive interaction techniques, the so-called Natural User Interfaces (NUI), now allows people who are bedridden and/or motor-impaired to perform a set of actions through gestures, thereby improving their quality of life. The solution implemented in this project is based on image processing and computer vision using the Kinect 3D sensor, and consists of a natural interface for an application that recognizes gestures performed by a human hand. The gestures identified by the application trigger a set of actions suited to a bedridden person, such as calling for emergency assistance, turning the TV on or off, or adjusting the inclination of the bed. The development of this project involved several stages. Initially, there was intensive research into the techniques and technologies considered important for the work; this research stage accompanied practically the entire process. The second stage consisted of configuring the system at the hardware and software levels. After system configuration, the first data were obtained from the Kinect 3D sensor and converted into a format better suited to subsequent processing. Hand segmentation then enabled the recognition of the six implemented gestures through a matching technique. The results obtained are satisfactory, with about 96% valid classifications. The health and well-being sector needs applications that improve the quality of life of bedridden people; in this sense, the developed prototype is highly relevant in today's society, where the population is ageing.
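The abstract reports about 96% valid recognitions via a "matching technique" for six gestures, but does not specify the matcher. A minimal sketch, assuming binary-mask template matching of a segmented hand against stored gesture templates scored by Jaccard overlap (all names and shapes here are hypothetical, not the project's actual implementation):

```python
import numpy as np

def match_gesture(hand_mask: np.ndarray, templates: dict) -> str:
    """Return the label of the template whose binary mask best overlaps
    hand_mask (e.g. a hand region segmented from a Kinect depth image)."""
    best_label, best_score = None, -1.0
    for label, tpl in templates.items():
        inter = np.logical_and(hand_mask, tpl).sum()
        union = np.logical_or(hand_mask, tpl).sum()
        score = inter / union if union else 0.0  # Jaccard similarity
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy 4x4 "gesture" templates
open_hand = np.ones((4, 4), dtype=bool)
fist = np.zeros((4, 4), dtype=bool); fist[1:3, 1:3] = True
observed = np.ones((4, 4), dtype=bool); observed[0, 0] = False  # noisy open hand
print(match_gesture(observed, {"open": open_hand, "fist": fist}))  # open
```

In practice each recognized label would be mapped to one of the bedside actions (emergency call, TV control, bed inclination).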
Abstract:
Pectus Carinatum (PC) is a chest deformity consisting of the anterior protrusion of the sternum and adjacent costal cartilages. Non-operative corrections, such as the orthotic compression brace, require prior information about the patient's chest surface to improve the overall brace fit. This paper focuses on the validation of the Kinect scanner for the modelling of an orthotic compression brace for the correction of Pectus Carinatum. To this end, a phantom chest wall surface was acquired using two scanner systems (Kinect and Polhemus FastSCAN) and compared through CT. The results show an RMS error of 3.25 mm between the CT data and the surface mesh from the Kinect sensor, and 1.5 mm for the FastSCAN sensor.
Abstract:
Pectus Carinatum (PC) is a chest deformity consisting of the anterior protrusion of the sternum and adjacent costal cartilages. Non-operative corrections, such as the orthotic compression brace, require prior information about the patient's chest surface to improve the overall brace fit. This paper focuses on the validation of the Kinect scanner for the modelling of an orthotic compression brace for the correction of Pectus Carinatum. To this end, a phantom chest wall surface was acquired using two scanner systems (Kinect and Polhemus FastSCAN) and compared through CT. The results show an RMS error of 3.25 mm between the CT data and the surface mesh from the Kinect sensor, and 1.5 mm for the FastSCAN sensor.
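The RMS figures above quantify scanner-to-CT surface discrepancy. The paper does not publish its exact metric; a minimal sketch, assuming RMS of nearest-neighbour point-to-point distances between two sampled surfaces (a common formulation, here with toy coordinates):

```python
import numpy as np

def rms_error(scan_pts: np.ndarray, ref_pts: np.ndarray) -> float:
    """RMS of nearest-neighbour distances from each scanned point to the
    reference (CT) point cloud. Both arrays have shape (N, 3)."""
    # Full pairwise distance matrix; fine for small clouds,
    # use a KD-tree for real meshes
    d = np.linalg.norm(scan_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.sqrt(np.mean(nearest ** 2)))

ref = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0]])
scan = ref + np.array([[3.0, 0, 0], [0, 0, 0], [0, 4, 0]])  # offsets 3, 0, 4 mm
print(rms_error(scan, ref))  # sqrt((9 + 0 + 16) / 3), about 2.89
```

On real meshes this would be evaluated over thousands of vertices after rigid registration of the two surfaces.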
Abstract:
This paper presents Palco, a prototype system specifically designed for the production of 3D cartoon animations. The system addresses the specific problems of producing cartoon animations, where the main objective is not to reproduce realistic movements, but rather to animate cartoon characters with predefined and characteristic body movements and facial expressions. The techniques employed in Palco are simple and easy to use, not requiring any invasive or complicated motion capture system, as both the body motion and facial expression of actors are captured simultaneously, using an infrared motion detection sensor, a regular camera and a pair of electronically instrumented gloves. The animation process is completely actor-driven, with the actor controlling the character's movements, gestures, facial expression and voice, all in real time. The actor-controlled cartoonification of the captured facial and body motion is a key functionality of Palco, and one that makes it specifically suited for the production of cartoon animations.
Resumo:
The exponential increase of home-bound persons who live alone and are in need of continuous monitoring requires new solutions to current problems. Most of these cases present illnesses such as motor or psychological disabilities that deprive of a normal living. Common events such as forgetfulness or falls are quite common and have to be prevented or dealt with. This paper introduces a platform to guide and assist these persons (mostly elderly people) by providing multisensory monitoring and intelligent assistance. The platform operates at three levels. The lower level, denominated ‘‘Data acquisition and processing’’performs the usual tasks of a monitoring system, collecting and processing data from the sensors for the purpose of detecting and tracking humans. The aim is to identify their activities in an intermediate level called ‘‘activity detection’’. The upper level, ‘‘Scheduling and decision-making’’, consists of a scheduler which provides warnings, schedules events in an intelligent manner and serves as an interface to the rest of the platform. The idea is to use mobile and static sensors performing constant monitoring of the user and his/her environment, providing a safe environment and an immediate response to severe problems. A case study on elderly fall detection in a nursery home bedroom demonstrates the usefulness of the proposal.
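The three-level architecture described above can be viewed as a pipeline from raw sensing to decisions. A minimal sketch under heavy assumptions (the function names, the frame dictionaries, and the fall rule are all illustrative inventions, not the paper's actual design):

```python
# Lower level: collect sensor data, keep frames where a person was detected.
def acquire_and_process(raw_frames):
    return [f for f in raw_frames if f.get("person_detected")]

# Intermediate level: classify each tracked observation into an activity.
def detect_activity(tracked):
    return ["fall" if f.get("on_floor") else "normal" for f in tracked]

# Upper level: turn activities into warnings / scheduled responses.
def schedule_and_decide(activities):
    return ["alert caregiver" if a == "fall" else "ok" for a in activities]

frames = [{"person_detected": True, "on_floor": True},
          {"person_detected": False},
          {"person_detected": True, "on_floor": False}]
print(schedule_and_decide(detect_activity(acquire_and_process(frames))))
# ['alert caregiver', 'ok']
```

The point of the layering is that each level only consumes the previous level's output, so sensors, activity models and scheduling policies can evolve independently.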
Abstract:
Thermoplastic elastomer/carbon nanotube composites are studied for sensor applications due to their excellent mechanical and electrical properties. The piezoresistive properties of triblock copolymer styrene-butadiene-styrene (SBS)/carbon nanotube (CNT) composites prepared by solution casting have been investigated. The Young's modulus of the SBS/CNT composites increases with the amount of CNT filler present in the samples, without losing the high strain deformation of the polymer matrix (~1500%). Furthermore, above the percolation threshold these materials are well suited for the development of large-deformation sensors due to their strong piezoresistive response. The piezoresistive properties, evaluated by uniaxial stretching in tensile mode and 4-point bending, showed gauge factors of up to 120. The excellent linearity obtained between strain and electrical resistance makes these composites interesting for large-strain piezoresistive sensor applications.
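The gauge factor quoted above is the standard piezoresistive figure of merit, GF = (ΔR/R0)/ε, i.e. the relative resistance change per unit strain. A short worked example (the resistance and strain values are illustrative, not measurements from the paper):

```python
def gauge_factor(r0: float, r: float, strain: float) -> float:
    """Gauge factor GF = (ΔR / R0) / ε: the slope relating relative
    resistance change to mechanical strain."""
    return (r - r0) / r0 / strain

# Example: resistance rises from 1.00 kΩ to 1.12 kΩ at 0.1% strain
print(gauge_factor(1000.0, 1120.0, 0.001))  # 120.0, i.e. GF = 120
```

A GF of 120 is large compared with ~2 for conventional metal-foil strain gauges, which is why such composites are attractive for large-strain sensing.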
Abstract:
In recent years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as a disease progression readout and to develop therapeutic strategies. This work presents an image processing tool to automatically segment, classify and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis elegans. A total of 150 data sets, each containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals' transparency, most of the slice pixels appeared dark, hampering direct reconstruction of the body volume. Therefore, for each data set, all slices were stacked into one single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses Tukey's biweight as the edge-stopping function. The median of the resulting image histogram was used to dynamically determine a thresholding level, which allows a smoothed exterior contour of the worm to be determined and the medial axis of the worm body to be extracted by thinning its skeleton. Based on this exterior contour diameter and the medial animal axis, random 3D points were then calculated to produce a volume mesh approximation. The protein aggregations were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results obtained were consistent with qualitative observations in the literature, allowing unbiased, reliable and high-throughput quantification of protein aggregates. This may lead to a significant improvement in treatment planning and preventive interventions for neurodegenerative diseases.
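The anisotropic diffusion step mentioned above smooths the image while Tukey's biweight shuts diffusion off entirely across strong gradients, preserving edges. A minimal sketch of one explicit 4-neighbour update in the Perona-Malik style with that edge-stopping function (the discretisation details and parameter names are assumptions, not the paper's exact scheme):

```python
import numpy as np

def tukey_g(grad: np.ndarray, sigma: float) -> np.ndarray:
    """Tukey's biweight edge-stopping function: 1 for zero gradient,
    exactly 0 wherever |gradient| exceeds sigma (edges are preserved)."""
    return np.where(np.abs(grad) <= sigma,
                    (1 - (grad / sigma) ** 2) ** 2, 0.0)

def diffuse_step(img: np.ndarray, sigma: float, lam: float = 0.2) -> np.ndarray:
    """One explicit anisotropic-diffusion update (4-neighbour scheme,
    periodic boundaries via np.roll for brevity)."""
    out = img.astype(float)
    # Finite differences toward each neighbour
    n = np.roll(out, -1, 0) - out
    s = np.roll(out, 1, 0) - out
    e = np.roll(out, -1, 1) - out
    w = np.roll(out, 1, 1) - out
    flux = sum(tukey_g(d, sigma) * d for d in (n, s, e, w))
    return out + lam * flux

flat = np.array([[0.0, 0, 10, 10]] * 4)
print(np.allclose(diffuse_step(flat, sigma=5.0), flat))  # True: the step edge survives
```

Unlike the classical exponential or rational stopping functions, Tukey's biweight is a redescending influence function, so diffusion across sufficiently strong edges is suppressed completely rather than merely attenuated.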
Abstract:
Image segmentation is a ubiquitous task in medical image analysis, required to estimate morphological or functional properties of given anatomical targets. While automatic processing is highly desirable, image segmentation remains to date a supervised process in daily clinical practice. Indeed, challenging data often require user interaction to capture the required level of anatomical detail. To optimize the analysis of 3D images, the user should be able to interact efficiently with the result of any segmentation algorithm to correct any possible disagreement. Building on a previously developed real-time 3D segmentation algorithm, we propose in the present work an extension towards an interactive application where user information can be used online to steer the segmentation result. This enables a synergistic collaboration between the operator and the underlying segmentation algorithm, thus contributing to higher segmentation accuracy while keeping total analysis time competitive. To this end, we formalize the user interaction paradigm using a geometrical approach, where the user input is mapped to a non-Cartesian space and this information is used to drive the boundary towards the position provided by the user. Additionally, we propose a shape regularization term which improves the interaction with the segmented surface, thereby making the interactive segmentation process less cumbersome. The resulting algorithm offers competitive performance both in terms of segmentation accuracy and in terms of total analysis time. This contributes to a more efficient use of the existing segmentation tools in daily clinical practice. Furthermore, it compares favorably to state-of-the-art interactive segmentation software based on a 3D livewire-based algorithm.
Abstract:
The development of three-dimensional digital characters in the field of animation, and the constant search for convincing technological solutions allied to a distinctive aesthetic, have contributed to the success and affirmation of three-dimensional animation in the entertainment industry. However, any work that pursues or explores the digital/3D approach becomes a 'victim' of the limitations of rendering applied to a sequence of images, due to increased financial and human costs, as well as the resulting difficulty in meeting objectives and deadlines. Real time has assumed an increasingly predominant role in the interactive animation industry. With the evolution of technology came the need to find an appropriate methodology to leverage the development of real-time 3D animation using open-source or low-budget software, reducing costs while removing any dependence on rendering in 3D animation. The development of real-time characters enables a new approach: interactivity in the art of animation. This opens up a wide range of new applications and, consequently, contributes to increasing the interest and curiosity of the spectator. However, the insertion, implementation and (ab)use of technology in the field of animation raises current questions about the role of the animator. This dissertation analyses these aspects, supporting the real-time 3D animation project named 'PALCO'.
Abstract:
Nowadays, different techniques are available for manufacturing full-arch implant-supported prostheses, many of them based on an impression procedure. Nevertheless, the long-term success of the prosthesis is highly influenced by the accuracy of this process, which is affected by factors such as the impression material, implant position, angulation and depth. This paper investigates the feasibility of a 3D electromagnetic motion tracking system as an acquisition method for modeling such prostheses. To this end, we propose an implant acquisition method at the patient's mouth, using a specific prototyped tool coupled with a tracker sensor, and a set of calibration procedures (for distortion correction and tool calibration), which ultimately yields combined measurements of each implant's position and angulation, eliminating the use of any impression material. However, in the particular case of the evaluated tracking system, the order of magnitude of the obtained errors invalidates its use for this specific application.
Abstract:
One of the current frontiers in the clinical management of Pectus Excavatum (PE) patients is the prediction of the surgical outcome prior to the intervention. This can be done through computerized simulation of the Nuss procedure, which requires an anatomically correct representation of the costal cartilage. To this end, we take advantage of the tubular structure of the costal cartilage to detect it through multi-scale vesselness filtering. This information is then used in an interactive 2D initialization procedure, which uses anatomical maximum intensity projections of 3D vesselness feature images to efficiently initialize the 3D segmentation process. We identify the cartilage tissue centerlines in these projected 2D images using a livewire approach. We finally refine the 3D cartilage surface through region-based sparse field level-sets. We have tested the proposed algorithm on 6 non-contrast CT datasets from PE patients. Good segmentation performance was found against reference manual contouring, with an average Dice coefficient of 0.75±0.04 and an average mean surface distance of 1.69±0.30 mm. The proposed method requires roughly 1 minute for the interactive initialization step, which can positively contribute to an extended use of this tool in clinical practice, since current manual delineation of the costal cartilage can take up to an hour.
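The Dice coefficient reported above is the standard overlap measure between an automatic segmentation A and a manual reference B: 2|A∩B| / (|A| + |B|). A short self-contained example on toy 2D masks (real evaluation would use the 3D voxel masks):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks;
    1.0 means perfect overlap, 0.0 means disjoint."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((8, 8), bool); auto[2:6, 2:6] = True      # 16 "voxels"
manual = np.zeros((8, 8), bool); manual[3:7, 2:6] = True  # 16, shifted one row
print(dice(auto, manual))  # 0.75: overlap of 12, so 2*12 / (16 + 16)
```

The complementary mean surface distance measures how far the two boundaries lie apart on average, which is why both metrics are usually reported together.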
Abstract:
Introduction and Objectives. Laparoscopic surgery has undeniable advantages, such as reduced postoperative pain, smaller incisions, and faster recovery. However, to improve surgeons' performance, ergonomic adaptations of the laparoscopic instruments and the introduction of robotic technology are needed. The aim of this study was to ascertain the influence of a new hand-held robotic device for laparoscopy (HHRDL) and 3D vision on the laparoscopic skills performance of 2 different groups, naïve and expert. Materials and Methods. Each participant performed 3 laparoscopic tasks (Peg transfer, Wire chaser, Knot) in 4 different ways. With random sequencing we assigned the execution order of the tasks based on the first type of visualization and laparoscopic instrument. The time to complete each laparoscopic task was recorded and analyzed with one-way analysis of variance. Results. Eleven expert and 15 naïve participants were included. Three-dimensional video helped the naïve group achieve better performance in Peg transfer, Wire chaser 2 hands, and Knot; the new device improved the execution of all laparoscopic tasks (P < .05). For the expert group, the 3D video system was beneficial in Peg transfer and Wire chaser 1 hand, and the robotic device in Peg transfer, Wire chaser 1 hand, and Wire chaser 2 hands (P < .05). Conclusion. The HHRDL helps the execution of difficult laparoscopic tasks, such as Knot, in the naïve group. Three-dimensional vision makes laparoscopic performance easier for participants without laparoscopic experience, unlike those with experience in laparoscopic procedures.
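The one-way analysis of variance used above compares mean completion times across conditions via the F statistic, the ratio of between-group to within-group variance. A minimal sketch with invented timing data (the numbers below are illustrative only, not the study's measurements):

```python
import numpy as np

def one_way_anova_f(*groups) -> float:
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_x = np.concatenate([np.asarray(g, float) for g in groups])
    grand = all_x.mean()
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, float) - np.mean(g)) ** 2).sum()
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical task-completion times (seconds), 2D vs 3D visualisation
t_2d = [62.0, 70.0, 66.0, 74.0]
t_3d = [48.0, 55.0, 51.0, 58.0]
print(one_way_anova_f(t_2d, t_3d))  # about 19.57
```

In practice one would obtain the p-value from the F distribution with (k-1, n-k) degrees of freedom, e.g. via `scipy.stats.f_oneway`, and declare significance at P < .05 as the study does.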