Abstract:
Positioning a robot with respect to objects by using data provided by a camera is a well-known technique called visual servoing. In order to perform a task, the object must exhibit visual features which can be extracted from different points of view. Visual servoing is therefore object-dependent, as it relies on the object's appearance. Consequently, the positioning task cannot be performed in the presence of non-textured objects, or objects for which extracting visual features is too complex or too costly. This paper proposes a solution to tackle this limitation inherent in current visual servoing techniques. Our proposal is based on the coded structured light approach as a reliable and fast way to solve the correspondence problem. In this case, a coded light pattern is projected, providing robust visual features independently of the object's appearance.
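A common way to realise coded structured light is binary Gray-code projection: each projector column receives a unique bit pattern across a sequence of frames, and decoding the bits observed at a camera pixel recovers the corresponding projector column. The sketch below illustrates this general idea only; the paper's actual pattern coding may differ.

```python
# Illustrative Gray-code structured-light encoding/decoding
# (generic technique, not necessarily the paper's specific pattern).

def gray_code(i: int) -> int:
    """Binary-reflected Gray code of integer i."""
    return i ^ (i >> 1)

def encode_columns(n_cols: int, n_bits: int) -> list:
    """One bit-plane per projected frame: frames[k][c] is the 0/1
    intensity that frame k projects onto projector column c."""
    return [[(gray_code(c) >> (n_bits - 1 - k)) & 1 for c in range(n_cols)]
            for k in range(n_bits)]

def decode_pixel(bits: list) -> int:
    """Invert the Gray code from the per-frame bits seen at one camera pixel."""
    g = 0
    for b in bits:
        g = (g << 1) | b
    col = 0
    while g:          # Gray -> binary
        col ^= g
        g >>= 1
    return col

n_bits, n_cols = 4, 16
frames = encode_columns(n_cols, n_bits)
# A camera pixel imaging projector column 11 observes these bits over the frames:
observed = [frames[k][11] for k in range(n_bits)]
assert decode_pixel(observed) == 11
```

Because consecutive Gray codes differ in a single bit, a one-bit decoding error at a stripe boundary displaces the correspondence by at most one column, which is why Gray codes are preferred over plain binary in this setting.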
Abstract:
Colour image segmentation based on the hue component presents some problems due to the physical process of image formation. One of these problems is colour clipping, which appears when at least one of the sensor components is saturated. We have designed a system, trained on a set of colours, to recover the chromatic information of those pixels in which colour has been clipped. The chromatic correction method is based on the fact that hue and saturation are invariant to uniform scaling of the three RGB components. The proposed method has been validated by means of a specific colour image processing board that has allowed its execution in real time. We show experimental results of the application of our method.
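The invariance the correction method relies on is easy to verify: scaling all three RGB channels by the same factor leaves hue and saturation unchanged while only the value channel scales. A minimal check with the standard library:

```python
# Hue and saturation are invariant under uniform scaling of (R, G, B);
# only value (brightness) scales. This is the property exploited by the
# chromatic-correction method. Example colour values are arbitrary.
import colorsys

r, g, b = 0.8, 0.4, 0.2
h1, s1, v1 = colorsys.rgb_to_hsv(r, g, b)

k = 0.5  # uniform scale factor chosen so no channel clips
h2, s2, v2 = colorsys.rgb_to_hsv(k * r, k * g, k * b)

assert abs(h1 - h2) < 1e-9      # hue unchanged
assert abs(s1 - s2) < 1e-9      # saturation unchanged
assert abs(v2 - k * v1) < 1e-9  # value scales with k
```

This is why, for a clipped pixel, hue and saturation estimated from unclipped training colours can be combined with a corrected intensity to restore the chromatic information.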
Abstract:
In the UK, dental students are required to train and practise on real human tissues at a very early stage of their courses. Currently, the human tissues, such as decayed teeth, are mounted in a physical model resembling a human head. The problems with these models in teaching are: (1) every student operates on teeth that are always unique; (2) the process cannot be recorded for examination purposes; and (3) the same training is not repeatable. The aim of the PHATOM Project is to develop a dental training system using haptic technology. This paper documents the project background, specification, research and development of the first prototype system. It also discusses the research in visual display, haptic devices and haptic rendering. This includes stereo vision, motion parallax, volumetric modelling and surface remapping algorithms, as well as the analysis and design of the system. A new volumetric-to-surface model transformation algorithm is also introduced. The paper concludes with future work on the system's development and research.
Abstract:
In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.
Abstract:
We have developed a heterologous expression system for transmembrane lens main intrinsic protein (MIP) in Nicotiana tabacum plant tissue. A native bovine MIP26 amplicon was subcloned into an expression cassette under the control of a constitutive Cauliflower Mosaic Virus promoter, also containing a neomycin phosphotransferase operon. This cassette was transformed into Agrobacterium tumefaciens by triparental mating and used to infect plant tissue grown in culture. Recombinant plants were selected by their ability to grow and root on kanamycin-containing media. The presence of MIP in the plant tissues was confirmed by PCR, RT-PCR and immunohistochemistry. A number of benefits of this system for the study of MIP are discussed, as well as its application as a tool for the study of heterologously expressed proteins in general.
Abstract:
A wind catcher/tower natural ventilation system was installed in a seminar room in the building of the School of Construction Management and Engineering, the University of Reading in the UK. Performance was analysed by means of ventilation tracer gas measurements, indoor climate measurements (temperature, humidity, CO2) and occupant surveys. In addition, the potential of simple design tools was evaluated by comparing observed ventilation results with those predicted by an explicit ventilation model and the AIDA implicit ventilation model. To support this analysis, external climate parameters (wind speed and direction, solar radiation, external temperature and humidity) were also monitored. The results showed that the chosen ventilation design provided a substantially greater ventilation rate than an equivalent area of openable window. Air quality parameters also stayed within accepted norms, while occupants expressed general satisfaction with the system and with comfort conditions. Night cooling was maximised by using the system in combination with openable windows. Comparisons of calculations with ventilation rate measurements showed that, while AIDA gave results reasonably well correlated with the monitored performance, the widely used industry explicit model was found to overestimate the monitored ventilation rate.
Abstract:
Under the framework of the European Union-funded SAFEE project(1), this paper gives an overview of a novel monitoring and scene analysis system developed for use on board aircraft in spatially constrained environments. The techniques discussed herein aim to warn on-board crew about pre-determined indicators of threat intent (such as running or shouting in the cabin), as elicited from industry and security experts. The subject matter experts believe that activities such as these are strong indicators of the beginnings of undesirable chains of events or scenarios, which should not be allowed to develop aboard aircraft. The project aims to detect these scenarios and provide advice to the crew. These events may involve unruly passengers or be indicative of the precursors to terrorist threats. With a state-of-the-art tracking system using homography intersections of motion images, and probability-based Petri nets for scene understanding, the SAFEE behavioural analysis system automatically assesses the output from multiple intelligent sensors and creates recommendations that are presented to the crew through an integrated airborne user interface. Evaluation of the system was conducted within a full-size aircraft mockup, and experimental results are presented, showing that the SAFEE system is well suited to monitoring people in confined environments, and that meaningful and instructive output regarding human actions can be derived from the sensor network within the cabin.
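The scene-understanding layer described above is built on Petri nets, where sensor events fire transitions that move the net towards markings representing scenarios of interest. As a minimal illustration of that mechanism (an ordinary net without probabilities, with place and transition names invented here rather than taken from SAFEE):

```python
# Minimal Petri-net firing sketch: tokens in places encode the current
# scene state; sensor events fire transitions when they are enabled.
# All names below are illustrative, not SAFEE's actual model.

marking = {"passenger_seated": 1, "passenger_standing": 0, "alert": 0}

transitions = {
    # name: (tokens consumed, tokens produced)
    "stand_up": ({"passenger_seated": 1}, {"passenger_standing": 1}),
    "run_detected": ({"passenger_standing": 1}, {"alert": 1}),
}

def enabled(name: str) -> bool:
    consume, _ = transitions[name]
    return all(marking[p] >= n for p, n in consume.items())

def fire(name: str) -> None:
    assert enabled(name), f"transition {name} is not enabled"
    consume, produce = transitions[name]
    for p, n in consume.items():
        marking[p] -= n
    for p, n in produce.items():
        marking[p] += n

fire("stand_up")      # tracker reports the passenger standing
fire("run_detected")  # motion analysis reports running
assert marking["alert"] == 1
```

A probability-based net, as used in SAFEE, would additionally weight transition firings by the confidence of the underlying sensor detections rather than treating events as certain.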
Abstract:
A discharge-flow system, coupled to cavity-enhanced absorption spectroscopy (CEAS) detection systems for NO3 at λ = 662 nm and NO2 at λ = 404 nm, was used to investigate the kinetics of the reactions of NO3 with eight peroxy radicals at P ≈ 5 Torr and T ≈ 295 K. The rate constants obtained were (k/10⁻¹² cm³ molecule⁻¹ s⁻¹): CH3O2 (1.1 ± 0.5), C2H5O2 (2.3 ± 0.7), CH2FO2 (1.4 ± 0.9), CH2ClO2 (3.8 +1.4/−2.6), c-C5H9O2 (1.2 +1.1/−0.5), c-C6H11O2 (1.9 ± 0.7), CF3O2 (0.62 ± 0.17) and CF3CFO2CF3 (0.24 ± 0.13). We explore possible relationships between k and the orbital energies of the reactants. We also provide a brief discussion of the potential impact of the reactions of NO3 with RO2 on the chemistry of the night-time atmosphere.
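In discharge-flow kinetics of this kind, a bimolecular rate constant is commonly extracted from a pseudo-first-order decay: with RO2 in excess, [NO3](t) = [NO3]0·exp(−k[RO2]t), so the slope of ln[NO3] versus reaction time gives k′ = k[RO2]. The sketch below uses synthetic, noiseless numbers chosen near the magnitudes reported above; it illustrates the standard analysis, not the paper's actual data treatment.

```python
# Pseudo-first-order analysis sketch: recover a bimolecular rate constant
# from the decay of NO3 in excess RO2. All numbers are synthetic.
import math

k_true = 1.1e-12   # cm^3 molecule^-1 s^-1 (order of the CH3O2 value)
ro2 = 5.0e12       # molecule cm^-3, excess peroxy-radical concentration
no3_0 = 1.0e11     # molecule cm^-3, initial NO3

times = [0.0, 0.05, 0.10, 0.15, 0.20]  # s, reaction (contact) times
no3 = [no3_0 * math.exp(-k_true * ro2 * t) for t in times]

# Linear least-squares fit of ln[NO3] vs t: slope = -k' = -k * [RO2]
xs, ys = times, [math.log(c) for c in no3]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
k_fit = -slope / ro2
assert abs(k_fit - k_true) / k_true < 1e-6  # exact for noiseless data
```

Repeating the fit at several excess [RO2] values, and checking that k′ scales linearly with [RO2], is the usual confirmation that the decay really is pseudo-first-order.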
Abstract:
This article describes an application of computers to a consumer-based production engineering environment. Particular consideration is given to the utilisation of low-cost computer systems for the visual inspection of components on a production line in real time. The process of installation is discussed, from identifying the need for artificial vision and justifying the cost, through to choosing a particular system and designing the physical and program structure.
Abstract:
Researchers in the rehabilitation engineering community have been designing and developing a variety of passive/active devices to help persons with limited upper extremity function to perform essential daily manipulations. Devices range from low-end tools such as head/mouth sticks to sophisticated robots using vision and speech input. While almost all of the high-end equipment developed to date relies on visual feedback alone to guide the user, providing no tactile or proprioceptive cues, the "low-tech" head/mouth sticks deliver better "feel" because of the inherent force feedback through physical contact with the user's body. However, the disadvantage of a conventional head/mouth stick is that it can only function in a limited workspace, and its performance is limited by the user's strength. It therefore seems reasonable to attempt to develop a system that exploits the advantages of the two approaches: the power and flexibility of robotic systems with the sensory feedback of a headstick. The system presented in this paper reflects the design philosophy stated above. It contains a pair of master-slave robots, with the master being operated by the user's head and the slave acting as a telestick. Described in this paper are the design, control strategies, implementation and performance evaluation of the head-controlled force-reflecting telestick system.
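A common way to obtain force reflection in a master-slave pair of this kind is position-position bilateral control: the slave PD-tracks the master, and the same coupling force is fed back to the operator. The sketch below is a generic single-axis simulation of that scheme; the gains, dynamics and the very choice of controller are illustrative assumptions, not the paper's design.

```python
# Generic position-position bilateral (force-reflecting) control sketch,
# single axis, symplectic Euler integration. All parameters are invented.

def step(master_x, slave_x, slave_v, f_ext, dt=0.001,
         kp=200.0, kd=5.0, mass=0.1):
    """One control step: the slave PD-tracks the master position; the
    reaction of the coupling force is what the user feels at the master."""
    f_couple = kp * (master_x - slave_x) - kd * slave_v
    a = (f_couple + f_ext) / mass    # slave rigid-body dynamics
    slave_v += a * dt
    slave_x += slave_v * dt
    f_reflect = -f_couple            # force reflected to the user's head
    return slave_x, slave_v, f_reflect

# Free motion: the slave converges to the commanded master position.
x, v = 0.0, 0.0
for _ in range(5000):                # 5 s of simulated time
    x, v, f = step(master_x=0.01, slave_x=x, slave_v=v, f_ext=0.0)
assert abs(x - 0.01) < 1e-3
```

When the slave contacts the environment, f_ext becomes non-zero, the tracking error grows, and f_reflect rises accordingly, which is the "feel" that a plain visual-feedback robot lacks.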
Abstract:
This paper presents a review of the design and development of the Yorick series of active stereo camera platforms and their integration into real-time closed loop active vision systems, whose applications span surveillance, navigation of autonomously guided vehicles (AGVs), and inspection tasks for teleoperation, including immersive visual telepresence. The mechatronic approach adopted for the design of the first system, including head/eye platform, local controller, vision engine, gaze controller and system integration, proved to be very successful. The design team comprised researchers with experience in parallel computing, robot control, mechanical design and machine vision. The success of the project has generated sufficient interest to sanction a number of revisions of the original head design, including the design of a lightweight compact head for use on a robot arm, and the further development of a robot head to look specifically at increasing visual resolution for visual telepresence. The controller and vision processing engines have also been upgraded, to include the control of robot heads on mobile platforms and control of vergence through tracking of an operator's eye movement. This paper details the hardware development of the different active vision/telepresence systems.
Abstract:
Food is fundamental to human wellbeing and development. Increased food production remains a cornerstone strategy in the effort to alleviate global food insecurity. But despite the fact that global food production over the past half century has kept ahead of demand, today around one billion people do not have enough to eat, and a further billion lack adequate nutrition. Food insecurity faces mounting supply-side and demand-side pressures; key among these are climate change, urbanisation, globalisation, population increases and disease, as well as a number of other factors that are changing patterns of food consumption. Many of the challenges to equitable food access are concentrated in developing countries, where environmental pressures including climate change, population growth and other socio-economic issues are concentrated. Together these factors impede people's access to sufficient, nutritious food, chiefly through affecting livelihoods, income and food prices. Food security and human development go hand in hand, and their outcomes are co-determined to a significant degree. The challenge of food security is multi-scalar and cross-sector in nature. Addressing it will require the work of diverse actors to bring sustained improvements in human development and to reduce pressure on the environment. Unless there is investment in future food systems that are similarly cross-level, cross-scale and cross-sector, sustained improvements in human wellbeing together with reduced environmental risks and scarcities will not be achieved. This paper reviews current thinking and outlines these challenges. It suggests that essential elements in a successfully adaptive and proactive food system include: learning through connectivity between scales, drawing on local experience and technologies; high levels of interaction between diverse actors and sectors, ranging from primary producers to retailers and consumers; and use of frontier technologies.
Abstract:
This paper describes a visual stimulus generator (VSImG) capable of displaying a gray-scale, 256 x 256 x 8 bitmap image with a frame rate of 500 Hz using a boustrophedonic scanning technique. It is designed for experiments with motion-sensitive neurons of the fly's visual system, where the flicker fusion frequency of the photoreceptors can reach up to 500 Hz. Devices with such a high frame rate are not commercially available, but are required if sensory systems with high flicker fusion frequency are to be studied. The implemented hardware approach gives us complete real-time control of the displacement sequence and provides all the signals needed to drive an electrostatic deflection display. With the use of analog signals, very small high-resolution displacements, not limited by the image's pixel size, can be obtained. Very slow image displacements with visually imperceptible steps can also be generated. This can be of interest for other vision research experiments. Two different stimulus files can be used simultaneously, allowing the system to generate X-Y displacements on one display or independent movements on two displays as long as they share the same bitmap image.
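Boustrophedonic (serpentine) scanning simply traverses successive rows in alternating directions, so the deflection beam never has to fly back across the full display width between rows. A minimal sketch of that scan order:

```python
# Boustrophedonic (serpentine) scan order: even rows left-to-right,
# odd rows right-to-left, eliminating the horizontal flyback between rows.

def boustrophedon(rows: int, cols: int):
    """Yield (row, col) pixel coordinates in serpentine order."""
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            yield (r, c)

order = list(boustrophedon(3, 4))
assert order[:4] == [(0, 0), (0, 1), (0, 2), (0, 3)]   # left to right
assert order[4:8] == [(1, 3), (1, 2), (1, 1), (1, 0)]  # right to left
```

On an electrostatic deflection display this order maps directly to smooth, continuous X-deflection ramps of alternating slope, which is what makes the 500 Hz frame rate practical.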
Abstract:
The project introduces an application of computer vision to hand gesture recognition. A camera records a live video stream, from which a snapshot is taken with the help of the interface. The system is trained on each type of counting hand gesture (one, two, three, four and five) at least once. A test gesture is then presented, and the system tries to recognize it. Research was carried out on a number of algorithms that could best differentiate hand gestures, and the diagonal sum algorithm was found to give the highest accuracy rate. In the preprocessing phase, a self-developed algorithm removes the background of each training gesture. The image is then converted into a binary image, and the sums of all diagonal elements of the picture are taken; these sums help in differentiating and classifying different hand gestures. Previous systems have used data gloves or markers for input; the present system has no such constraints, and the user can make hand gestures naturally in view of the camera. A completely robust hand gesture recognition system is still under heavy research and development; the implemented system serves as an extendible foundation for future work.
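The diagonal-sum feature described above can be sketched as follows: binarise the image, then collect the sum along every diagonal as a compact feature vector, which a simple nearest-template rule can classify. The threshold and the classifier here are illustrative assumptions, not details given in the abstract.

```python
# Sketch of a diagonal-sum feature for gesture images: sums over all
# diagonals of the binarised image form the feature vector.
import numpy as np

def diagonal_sums(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Binarise, then sum along every diagonal (offsets -(rows-1)..cols-1)."""
    binary = (gray >= threshold).astype(int)
    rows, cols = binary.shape
    return np.array([binary.diagonal(offset=k).sum()
                     for k in range(-(rows - 1), cols)])

def classify(feature: np.ndarray, templates: dict) -> str:
    """Nearest stored template (Euclidean distance) -> gesture label."""
    return min(templates,
               key=lambda lbl: np.linalg.norm(feature - templates[lbl]))

# Toy check: an identity-like "gesture" puts all its mass on the main
# diagonal, which sits at index rows-1 of the feature vector.
img = np.eye(4) * 255
f = diagonal_sums(img)
assert f[3] == 4 and f.sum() == 4
```

Because each gesture count changes the silhouette's shape, the distribution of mass across the diagonals shifts, which is what lets this simple feature separate the five counting gestures.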