979 results for Visione Robotica Calibrazione Camera Robot Hand Eye
Abstract:
Diabetes is a common condition affecting around 69,000 people in Northern Ireland. One of the possible complications of diabetes is a condition called diabetic retinopathy, which can cause sight loss and blindness. Retinopathy causes damage to the tiny blood vessels (capillaries) that nourish the retina, the tissues in the back of the eye that deal with light. This can seriously affect vision. Research shows that if retinopathy is identified early, for example through retinal screening, and treated appropriately, blindness can be prevented in the majority of people with diabetes, both type 1 and type 2.

Screening programme
In Northern Ireland, a diabetic retinopathy screening programme (DRSP), run by the Public Health Agency, has been put in place to screen all eligible people with diabetes aged 12 years and over. Dr Bernadette Cullen, Consultant in Public Health Medicine, PHA, said: "Screening detects problems early and allows appropriate treatment to be offered. It is vital that everyone with diabetes attends diabetic retinopathy screening when it is offered. Early detection of potential problems offers a very real opportunity to intervene and, with appropriate treatment, can prevent blindness in the majority of those at risk."

The screening test
The screening test involves photographs being taken of the back of each eye, using a special camera. The test is painless and takes about 15 minutes. If the person is over 50 years of age, they will need to have drops put in their eyes about 15 minutes before the test to dilate their pupils. The photographs are sent to the regional screening centre for analysis by trained graders. Results will show whether patients require further referral for assessment or treatment by hospital eye services (HES). If this is not required, screening will be offered again the following year. GPs are informed of all results and, if the patient is under the care of a diabetologist, they too will be informed. Patients are informed of results by their GP and, if they need an urgent referral, protocols are in place to ensure this happens.

Many people with diabetes attend their optometrist (optician) on a regular basis to have a sight test for glasses. It is important they continue to do this - this test is free to people with diabetes. It is also vital that people with diabetes attend for diabetic retinopathy screening when invited, regardless of how or where their diabetes is treated, or whether they visit a hospital consultant/GP for their diabetic care. Patients are invited to screening via their GP practice. An information leaflet to help patients make an informed decision to attend for screening is also sent. This can be accessed via the PHA website: www.publichealth.hscni.net.
Abstract:
The estimation of camera egomotion is a well-established problem in computer vision. Many approaches have been proposed based on both the discrete and the differential epipolar constraint. The discrete case is mainly used in self-calibrated stereoscopic systems, whereas the differential case deals with a single moving camera. The article surveys several methods for mobile robot egomotion estimation, covering their evaluation on more than 0.5 million synthetic samples. Results from real data are also given.
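A minimal sketch of the discrete case, assuming calibrated intrinsics K and point correspondences from an existing feature tracker; OpenCV is used here for illustration and stands in for, rather than reproduces, the surveyed methods:

```python
# Discrete egomotion from two views via the epipolar constraint.
import numpy as np
import cv2

def estimate_egomotion(pts1, pts2, K):
    """Relative rotation and translation (up to scale) from matched points."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose the essential matrix, keeping the physically valid pose.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # t is unit-norm: the translation scale is unobservable
```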
Abstract:
This paper presents a vision-based localization approach for an underwater robot in a structured environment. The system is based on a coded pattern placed on the bottom of a water tank and an onboard down-looking camera. Its main features are absolute, map-based localization; landmark detection and tracking; and real-time computation (12.5 Hz). The proposed system provides the three-dimensional position and orientation of the vehicle along with its velocity. The accuracy of the drift-free estimates is very high, allowing them to be used as feedback measures in a velocity-based low-level controller. The paper details the localization algorithm, shows some graphical results, and discusses the accuracy of the system.
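A minimal sketch of how such map-based localization can be computed, assuming the 3D coordinates of the coded-pattern landmarks and their detected image positions are already available; the use of OpenCV's solvePnP and all names here are illustrative, not the paper's implementation:

```python
# Absolute camera pose from known landmarks on the tank floor.
import numpy as np
import cv2

def localize(object_pts, image_pts, K, dist_coeffs):
    """Recover camera pose relative to the pattern coordinate frame."""
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation: pattern frame -> camera frame
    cam_pos = -R.T @ tvec        # camera position in the pattern frame
    return R, cam_pos
```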
Abstract:
This research work deals with the modeling and design of a low-level speed controller for the mobile robot PRIM. The main objective is to develop an effective educational tool. On one hand, the interest in using the open mobile platform PRIM lies in integrating several subjects closely related to automatic control theory in an educational context, embracing communications, signal processing, sensor fusion and hardware design, amongst others. On the other hand, the idea is to implement useful navigation strategies so that the robot can serve as a mobile multimedia information point. It is in this context, when navigation strategies are oriented to goal achievement, that a local model predictive control is attained. Hence, these studies are presented as a very interesting control strategy for developing the future capabilities of the system.
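As a rough illustration of a local model predictive speed controller of this kind, the sketch below assumes a first-order identified motor model v[k+1] = a·v[k] + b·u[k] and solves the unconstrained horizon problem in closed form; the model parameters and weights are placeholders, not those of PRIM:

```python
# Unconstrained least-squares MPC for a first-order speed model.
import numpy as np

def mpc_speed(v0, v_ref, a=0.9, b=0.1, N=10, lam=0.01):
    """Return the first control move of the horizon-N predictive controller."""
    # Prediction over the horizon: v = F*v0 + G*u.
    F = np.array([a**(i + 1) for i in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a**(i - j) * b
    # Minimize ||G u - (v_ref - F v0)||^2 + lam ||u||^2 in closed form.
    rhs = v_ref - F * v0
    u = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ rhs)
    return u[0]  # receding horizon: apply only the first input
```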
Abstract:
This paper focuses on the mobile robot platform PRIM (platform robot information multimedia). The robot was built to cover two main needs of our group: on one hand, the need for a fully open mobile robotic platform that is very useful for the teaching and research activity of our school community; on the other hand, the idea of introducing an ethical product that would be useful as a mobile multimedia information point, as a service tool. This paper explains exactly how the system is made up and the philosophy behind this work. The navigation strategies and sensor fusion, in which the machine vision system is the most important component, are oriented towards goal achievement and are the key to the behaviour of the robot.
Abstract:
In the future, robots will enter our everyday lives to help us with various tasks. For complete integration and cooperation with humans, these robots need to be able to acquire new skills. Sensor capabilities for navigation in real human environments and intelligent interaction with humans are some of the key challenges. Learning-by-demonstration systems focus on the problem of human-robot interaction and let the human teach the robot by demonstrating the task using his own hands. In this thesis, we present a solution to a subproblem within the learning-by-demonstration field, namely human-robot grasp mapping. Robot grasping of objects in a home or office environment is a challenging problem. Programming-by-demonstration systems can provide important skills for aiding the robot in the grasping task. The thesis presents two techniques for human-robot grasp mapping: direct robot imitation from a human demonstrator, and intelligent grasp imitation. In intelligent grasp mapping, the robot takes the size and shape of the object into consideration, while for direct mapping, only the pose of the human hand is available. These are evaluated in a simulated environment on several robot platforms. The results show that knowing the object shape and size for a grasping task improves the robot's precision and performance.
Abstract:
A 41-year-old male presented with severe frostbite that was monitored clinically and with a new laser Doppler imaging (LDI) camera that records microcirculatory perfusion in arbitrary perfusion units (1-256 APUs). LDI monitoring detected perfusion differences in the hand and foot not seen visually. On days 4-5 after injury, LDI showed that while the fingers did not experience any significant perfusion change (average of 31±25 APUs on day 5), the patient's left big toe did (from 17±29 APUs on day 4 to 103±55 APUs on day 5). These changes in regional perfusion were not detectable by visual examination. On day 53 post-injury, all fingers with reduced perfusion by LDI were amputated, while the toe could be salvaged. This case clearly demonstrates that insufficient microcirculatory perfusion can be identified using LDI in ways that visual examination alone does not permit, allowing prognosis of clinical outcomes. Such information may also be used to develop improved treatment approaches.
Abstract:
Computed Tomography (CT) represents the standard imaging modality for tumor volume delineation in radiotherapy treatment planning of retinoblastoma, despite some inherent limitations. CT is very useful in providing information on physical density for dose calculation and morphological volumetric information, but presents a low sensitivity in assessing tumor viability. On the other hand, 3D ultrasound (US) allows a highly accurate definition of the tumor volume thanks to its high spatial resolution, but it is not currently integrated into treatment planning, being used only for diagnosis and follow-up. Our ultimate goal is the automatic segmentation of the gross tumor volume (GTV) in the 3D US, the segmentation of the organs at risk (OAR) in the CT, and the registration of both modalities. In this paper, we present some preliminary results in this direction. We present a 3D active-contour-based segmentation of the eyeball and the lens in CT images; the presented approach incorporates prior knowledge of the anatomy by using a 3D geometrical eye model. The automated segmentation results are validated by comparison with manual segmentations. Then, we present two approaches for the fusion of 3D CT and US images: (i) landmark-based transformation, and (ii) object-based transformation that makes use of eyeball contour information in the CT and US images.
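For the landmark-based transformation, a minimal sketch of estimating a rigid CT-US transform from paired anatomical landmarks is given below, using the standard Kabsch/orthogonal-Procrustes solution as a stand-in for the paper's method; the point arrays are assumed to be corresponding Nx3 landmark sets:

```python
# Rigid landmark registration (Kabsch / orthogonal Procrustes).
import numpy as np

def rigid_register(src, dst):
    """Find R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # proper rotation (det = +1)
    t = mu_d - R @ mu_s
    return R, t
```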
Abstract:
This research work deals with the modeling and design of a low-level speed controller for the mobile robot PRIM. The main objective is to develop an effective educational and research tool. On one hand, the interest in using the open mobile platform PRIM lies in integrating several subjects closely related to automatic control theory in an educational context, embracing communications, signal processing, sensor fusion and hardware design, amongst others. On the other hand, the idea is to implement useful navigation strategies so that the robot can serve as a mobile multimedia information point. It is in this context, when navigation strategies are oriented to goal achievement, that a local model predictive control is attained. Hence, these studies are presented as a very interesting control strategy for developing the future capabilities of the system. In this context, the research developed includes visual information as a meaningful source for detecting obstacle position coordinates as well as for planning the obstacle-free trajectory that the robot should follow.
Abstract:
Image filtering is a highly demanded approach to image enhancement in digital imaging system design. It is widely used in television and camera design technologies to improve the quality of the output image and to avoid problems such as image blurring, which gains importance in the design of large displays and digital cameras. This thesis proposes a new image filtering method based on visual characteristics of the human eye, such as the MTF. In contrast to traditional filtering methods based on human visual characteristics, this thesis takes into account the anisotropy of human vision. The proposed method is based on laboratory measurements of the human eye MTF and accounts for the degradation of the image by the latter. The method enhances the image to counteract its degradation by the eye's MTF, so as to give the perception of the original image quality. This thesis gives a basic understanding of the image filtering approach and the concept of the MTF, and describes an algorithm to perform image enhancement based on the MTF of the human eye. The experiments performed have shown quite good results according to human evaluation. Suggestions for future improvements of the algorithm are also given.
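A minimal sketch of the pre-compensation idea, assuming an 8-bit grayscale image and substituting a synthetic isotropic Gaussian MTF for the laboratory-measured anisotropic one; a regularized inverse filter is used so near-zero MTF values do not blow up:

```python
# MTF-based pre-compensation via a regularized inverse filter.
import numpy as np

def precompensate(img, sigma=0.15, eps=0.05):
    """Boost spatial frequencies attenuated by a model eye MTF."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mtf = np.exp(-(fx**2 + fy**2) / (2 * sigma**2))   # placeholder eye MTF
    spectrum = np.fft.fft2(img)
    # Wiener-style regularized inverse: approximately spectrum / mtf.
    boosted = spectrum * mtf / (mtf**2 + eps)
    return np.clip(np.real(np.fft.ifft2(boosted)), 0, 255)
```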
Abstract:
This report describes the development and implementation of a laboratory assignment for the Active and Robot Vision course. In the assignment, a system is designed and implemented that moves objects with a robot arm in three-dimensional space. The system uses digital images to determine the positions of the objects. In the assignment implementation presented here, the objects were segmented from the image based on their colors by thresholding in the HSV color space. The binary image obtained from the segmentation was filtered with a median filter to remove noise. The position of an object in the binary image was determined by labeling connected groups of pixels with a connected-component labeling method; the position of the largest labeled pixel group was taken as the object's position. The object positions in the image were mapped to three-dimensional coordinates by means of a calibrated camera. The system moved the objects based on these estimated three-dimensional positions.
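A minimal sketch of the described pipeline in Python/OpenCV; the HSV threshold bounds and kernel size are illustrative placeholders, not the values used in the assignment:

```python
# HSV thresholding -> median filtering -> connected components -> centroid.
import numpy as np
import cv2

def find_object(bgr, lo=(100, 120, 60), hi=(130, 255, 255)):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8),
                       np.array(hi, np.uint8))          # segment by color
    mask = cv2.medianBlur(mask, 5)                      # remove speckle noise
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:                                           # label 0 is background
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return tuple(centroids[largest])                    # (x, y) in pixels
```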
Abstract:
Sensor-based robot control allows manipulation in dynamic environments with uncertainties. Vision is a versatile, low-cost sensory modality, but its low sample rate, high sensor delay and uncertain measurements limit its usability, especially in strongly dynamic environments. Force is a complementary sensory modality allowing accurate measurements of local object shape when a tooltip is in contact with the object. In multimodal sensor fusion, several sensors measuring different modalities are combined to give a more accurate estimate of the environment. As force and vision are fundamentally different sensory modalities that do not share a common representation, combining the information from these sensors is not straightforward. In this thesis, methods for fusing proprioception, force and vision together are proposed. By making assumptions about object shape and modeling the uncertainties of the sensors, the measurements can be fused together in an extended Kalman filter. The fusion of force and visual measurements makes it possible to estimate the pose of a moving target at a high rate and with high accuracy using a moving camera mounted on the end-effector. The proposed approach takes the latency of the vision system into account explicitly in order to provide high-sample-rate estimates. The estimates also allow a smooth transition from vision-based motion control to force control. The velocity of the end-effector can be controlled by estimating the distance to the target by vision and determining a velocity profile that gives a rapid approach with minimal force overshoot. Experiments with a 5-degree-of-freedom parallel hydraulic manipulator and a 6-degree-of-freedom serial manipulator show that integrating several sensor modalities can increase the accuracy of the measurements significantly.
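As a rough illustration of the fusion idea, the sketch below shows a one-dimensional constant-velocity Kalman filter propagated at the control rate and corrected by low-rate vision measurements; the thesis' actual extended Kalman filter over 6-DOF pose with force measurements and explicit latency handling is more involved, and all noise values here are illustrative:

```python
# High-rate prediction with low-rate vision corrections (1-D toy model).
import numpy as np

dt = 0.01                                   # 100 Hz control rate
F = np.array([[1, dt], [0, 1]])             # state: [position, velocity]
Q = np.diag([1e-6, 1e-4])                   # process noise
H = np.array([[1.0, 0.0]])                  # vision measures position only
R = np.array([[1e-3]])                      # vision measurement noise

def predict(x, P):
    """Run at every control step."""
    return F @ x, F @ P @ F.T + Q

def correct(x, P, z):
    """Run only when a vision measurement z (length-1 array) arrives."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P
```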
Abstract:
This dissertation examined skill development in music reading by focusing on the visual processing of music notation in different music-reading tasks. Each of the three experiments of this dissertation addressed one of the three types of music reading: (i) sight-reading, i.e. reading and performing completely unknown music; (ii) rehearsed reading, during which the performer is already familiar with the music being played; and (iii) silent reading with no performance requirements. The use of eye-tracking methodology allowed the recording of the readers' eye movements during music reading with extreme precision. Due to the lack of coherence in the small body of prior studies on eye movements in music reading, the dissertation also had a heavy methodological emphasis. The present dissertation thus aimed to address two major issues: (1) it investigated the eye-movement indicators of skill and skill development in sight-reading, rehearsed reading and silent reading, and (2) it developed and tested suitable methods that can be used in future studies on the topic.

Experiment I focused on the eye-movement behaviour of adults during their first steps of learning to read music notation. The longitudinal experiment spanned a nine-month music-training period, during which 49 participants (university students taking part in a compulsory music course) sight-read and performed a series of simple melodies in three measurement sessions. Participants with no musical background were termed "novices", whereas "amateurs" had had musical training prior to the experiment. The main issue of interest was the change in the novices' eye movements and performances across the measurements, while the amateurs offered a point of reference for assessing the novices' development. The experiment showed that the novices tended to sight-read in a more stepwise fashion than the amateurs, the latter group manifesting more back-and-forth eye movements. The novices' skill development was reflected in the faster identification of note symbols involved in larger melodic intervals. Across the measurements, the novices also began to show sensitivity to the melodies' metrical structure, which the amateurs demonstrated from the very beginning. The stimulus melodies consisted of quarter notes, making the effects of meter and larger melodic intervals distinguishable from effects caused by, say, different rhythmic patterns.

Experiment II explored the eye movements of 40 experienced musicians (music education students and music performance students) during temporally controlled rehearsed reading. This cross-sectional experiment focused on the eye-movement effects of one-bar melodic alterations placed within a familiar melody. Synchronizing the performance and eye-movement recordings enabled the investigation of the eye-hand span, i.e., the temporal gap between a performed note and the point of gaze. The eye-hand span was typically found to remain around one second. Music performance students demonstrated increased processing efficiency through their shorter average fixation durations as well as in the two examined eye-hand span measures: these participants used larger eye-hand spans more frequently and inspected more of the musical score during the performance of one metrical beat than the students of music education.
Although all participants produced performances almost indistinguishable in terms of their auditory characteristics, the altered bars indeed affected the reading of the score: the general effects of expertise in the two eye-hand span measures, demonstrated by the music performance students, disappeared in the face of the melodic alterations.

Experiment III was a longitudinal experiment designed to examine the differences between adult novice and amateur musicians' silent reading of music notation, as well as the changes the 49 participants manifested during a nine-month music course. From a methodological perspective, a new opening for research on eye movements in music reading was the inclusion of a verbal protocol in the research design: after viewing the musical image, the readers were asked to describe what they had seen. A two-way categorization of the verbal descriptions was developed in order to assess the quality of the extracted musical information. A more extensive musical background was related to shorter average fixation durations, more linear scanning of the musical image, and more sophisticated verbal descriptions of the music in question. No apparent effects of skill development were observed for the novice music readers alone, but all participants improved their verbal descriptions towards the last measurement. Apart from the background-related differences between groups of participants, combining the verbal and eye-movement data in a cluster analysis identified three styles of silent reading. This finding demonstrated individual differences in how the freely defined silent-reading task was approached.

This dissertation is among the first series of experiments systematically addressing the visual processing of music notation in various types of music-reading tasks and focusing especially on the eye-movement indicators of developing music-reading skill. Overall, the experiments demonstrate that music-reading processes are affected not only by "top-down" factors, such as musical background, but also by the "bottom-up" effects of specific features of music notation, such as pitch heights, metrical division, rhythmic patterns and unexpected melodic events. From a methodological perspective, the experiments emphasize the importance of systematic stimulus design, temporal control during performance tasks, and the development of complementary methods for easing the interpretation of eye-movement data. To conclude, this dissertation suggests that advances in comprehending the cognitive aspects of music reading, the nature of expertise in this musical task, and the development of educational tools can be attained through the systematic application of eye-tracking methodology in this specific domain.
Abstract:
One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. A large number of robot measurement systems are now available commercially, yet there is a dearth of systems that are portable, accurate and low-cost. In this work, a measurement system that can fill this gap in local calibration is presented. The measurement system consists of a single CCD camera with a wide-angle lens mounted on the robot tool flange, and uses space resection models to measure the end-effector pose relative to a world coordinate system, taking radial distortions into account. Scale factors and the image center are obtained with innovative techniques making use of a multiview approach. The target plate consists of a grid of white dots printed on black photographic paper, mounted on the sides of a 90-degree angle plate. Results show that the achieved average accuracy varies from 0.2 mm to 0.4 mm, at distances from the target of 600 mm to 1000 mm respectively, with different camera orientations.
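A minimal sketch of the intrinsic-calibration step with radial distortion from multiple views of the dot grid, using OpenCV's calibrateCamera as a stand-in for the paper's own resection models and multiview techniques; all names are illustrative:

```python
# Multiview intrinsic calibration including radial distortion.
import numpy as np
import cv2

def calibrate(object_pts_per_view, image_pts_per_view, image_size):
    """object/image_pts_per_view: lists of Nx3 / Nx2 float32 arrays,
    one pair per view of the dot-grid target."""
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_pts_per_view, image_pts_per_view, image_size, None, None)
    # K holds the scale factors (fx, fy) and image center (cx, cy);
    # the first distortion coefficients are the radial terms k1, k2.
    return K, dist, rvecs, tvecs
```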