873 results for visual surveillance system
Abstract:
Lack of a valid shrimp cell line has been hampering the progress of research on shrimp viruses. One of the reasons identified was the absence of an appropriate medium that would satisfy the requirements of the cells in vitro. We report the first attempt to formulate an exclusive shrimp cell culture medium (SCCM) based on the haemolymph components of Penaeus monodon, prepared in isosmotic seawater of 27 ‰ salinity. The SCCM is composed of 22 amino acids, 4 sugars, 6 vitamins, cholesterol, FBS, phenol red, three antibiotics, potassium dihydrogen phosphate and di-sodium hydrogen phosphate at pH 6.8–7.2. Osmolality was adjusted to 720 ± 10 mOsm kg⁻¹ and the incubation temperature was 25 °C. The most appropriate composition was selected based on the extent of cell attachment and proliferation under visual observation. Metabolic activity of cultured cells was measured by MTT assay and compared with that in L-15 (29), modified L-15 and Grace's insect medium; performance was better in SCCM, especially for lymphoid cells, with a 107 % increase in activity and 85 ± 9 days of longevity. The cells from ovary and lymphoid organs were passaged twice using the newly designed shrimp cell dissociation "cocktail".
Abstract:
Content-Based Image Retrieval is one of the prominent areas in Computer Vision and Image Processing. Recognition of handwritten characters has been a popular area of research for many years and still remains an open problem. The proposed system uses visual image queries for retrieving similar images from a database of Malayalam handwritten characters. Local Binary Pattern (LBP) descriptors of the query images are extracted, and those features are compared with the features of the images in the database to retrieve the desired characters. This system with Local Binary Patterns gives excellent retrieval performance.
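The retrieval scheme this abstract describes can be sketched in a few lines: build an LBP histogram per image, then rank database images by histogram distance to the query. The sketch below is a minimal, dependency-free illustration (the 3x3 LBP variant, the L1 distance, and the helper names are assumptions, not the paper's exact pipeline).

```python
# Illustrative LBP retrieval sketch; images are 2-D lists of grey levels.
def lbp_histogram(img):
    """256-bin histogram of basic 3x3 Local Binary Pattern codes."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    # offsets of the 8 neighbours, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= centre:
                    code |= 1 << bit     # set bit if neighbour >= centre
            hist[code] += 1
    return hist

def l1_distance(h1, h2):
    """Compare two LBP histograms; smaller means more similar."""
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

A query would be answered by computing `lbp_histogram` for the query image and returning the database images with the smallest `l1_distance`.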
Abstract:
In pastoral production systems, mobility remains the main technique used to meet livestock's fodder requirements. Currently, with growing challenges to pastoral production systems, there is an urgent need for an in-depth understanding of how pastoralists continue to manage their grazing resources and how they determine their mobility strategies. This study examined the Borana pastoralists' regulation of access to grazing resources, mobility practices and cattle reproductive performance in three pastoral zones of the Borana region of southern Ethiopia. The central objective of the study was to contribute to the understanding of pastoral land use strategies at a scale relevant to their management. The study applied a multi-scalar methodological approach that allowed zooming in from the communal to the individual herd level. Through participatory mapping that used Google Earth image printouts as a visual aid, the study revealed that the Borana pastoralists conceptualized their grazing areas as distinctive grazing units with names, borders, and specific characteristics. This knowledge enables the herders to communicate the condition of grazing resources among themselves in a precise way, which is important in the management of livestock mobility. Analysis of grazing area use from the participatory maps showed that the Borana pastoralists apportion their grazing areas into categories that are accessed at different times of the year (temporal use areas). This re-organization is an attempt by the community to cope with the prevailing constraints, which result in fodder shortages especially during the dry periods. The re-organization represents a shift in the resource use system, as the previous mobility practice across the ecologically varied zones of the rangelands became severely restricted.
Grazing itineraries of 91 cattle herds over 16 months, obtained through seasonal calendar interviews, indicated that in the areas with the severest mobility constraints the herders spent most of their time in the year-round use areas that are in close proximity to the settlements. A significant change in mobility strategy was the disallowing of the foora practice by the communities in the Dirre and Malbe zones in order to reduce competition. With the reduction in mobility practices, there is a general decline in cattle reproductive parameters, with the areas experiencing the severest constraints showing the least favourable reproductive performance. The study concludes that the multi-scalar methodology was well suited to zooming into pastoral grazing management practices from the communal to the individual herd level. Also, the loss of mobility in the Borana pastoral system affects fulfilment of livestock feed requirements, resulting in reduced reproductive performance and herd growth potential. While reversal of the situation in the Borana rangelands is practically unfeasible, the findings from this research underscore the need to protect the remaining pastoral lands, since the pastoral production system remains the most important livelihood option for the majority of the Borana people. In this regard, the study emphasises the need to adopt and domesticate regional and international policy frameworks such as that proposed by the African Union in 2010.
Abstract:
The report describes a recognition system called GROPER, which performs grouping by using distance and relative orientation constraints that estimate the likelihood of different edges in an image coming from the same object. The thesis presents both a theoretical analysis of the grouping problem and a practical implementation of a grouping system. GROPER also uses an indexing module to allow it to make use of knowledge of different objects, any of which might appear in an image. We test GROPER by comparing it to a similar recognition system that does not use grouping.
Abstract:
This thesis presents a perceptual system for a humanoid robot that integrates abilities such as object localization and recognition with the deeper developmental machinery required to forge those competences out of raw physical experiences. It shows that a robotic platform can build up and maintain a system for object localization, segmentation, and recognition, starting from very little. What the robot starts with is a direct solution to achieving figure/ground separation: it simply 'pokes around' in a region of visual ambiguity and watches what happens. If the arm passes through an area, that area is recognized as free space. If the arm collides with an object, causing it to move, the robot can use that motion to segment the object from the background. Once the robot can acquire reliable segmented views of objects, it learns from them, and from then on recognizes and segments those objects without further contact. Both low-level and high-level visual features can also be learned in this way, and examples are presented for both: orientation detection and affordance recognition, respectively. The motivation for this work is simple. Training on large corpora of annotated real-world data has proven crucial for creating robust solutions to perceptual problems such as speech recognition and face detection. But the powerful tools used during training of such systems are typically stripped away at deployment. Ideally they should remain, particularly for unstable tasks such as object detection, where the set of objects needed in a task tomorrow might be different from the set of objects needed today. The key limiting factor is access to training data, but as this thesis shows, that need not be a problem on a robotic platform that can actively probe its environment, and carry out experiments to resolve ambiguity. 
This work is an instance of a general approach to learning a new perceptual judgment: find special situations in which the perceptual judgment is easy and study these situations to find correlated features that can be observed more generally.
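The core trick in this thesis, using the motion caused by poking to separate figure from ground, reduces in its simplest form to frame differencing: pixels that change between the frame before and the frame after contact belong to the disturbed object. The sketch below is a deliberately minimal stand-in (the function name, threshold value, and 2-D list representation are assumptions; the thesis's segmentation is far richer).

```python
def motion_mask(frame_before, frame_after, thresh=20):
    """Mark pixels whose grey level changed by more than `thresh`
    between two frames -- a crude proxy for motion-based
    figure/ground separation after the robot pokes an object."""
    h, w = len(frame_before), len(frame_before[0])
    return [[abs(frame_after[y][x] - frame_before[y][x]) > thresh
             for x in range(w)]
            for y in range(h)]
```

In the thesis's setting, the connected region of `True` pixels near the poking arm would be taken as the segmented object view and fed to the recognition machinery.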
Abstract:
This paper describes a general, trainable architecture for object detection that has previously been applied to face and people detection, with a new application to car detection in static images. Our technique is a learning-based approach that uses a set of labeled training data from which an implicit model of an object class -- here, cars -- is learned. Instead of pixel representations that may be noisy and therefore not provide a compact representation for learning, our training images are transformed from pixel space to that of Haar wavelets that respond to local, oriented, multiscale intensity differences. These feature vectors are then used to train a support vector machine classifier. The detection of cars in images is an important step in applications such as traffic monitoring, driver assistance systems, and surveillance, among others. We show several examples of car detection on out-of-sample images and show an ROC curve that highlights the performance of our system.
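The "local, oriented, multiscale intensity differences" that Haar wavelets respond to can be computed cheaply with an integral image (summed-area table). The sketch below shows one vertical-edge Haar-like feature as an illustration; the function names and the single feature shown are assumptions, not the paper's actual wavelet transform, which produces a full multiscale feature vector per window.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y, cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, y, x, hgt, wid):
    """Sum of the hgt x wid rectangle at (y, x), in O(1) via the table."""
    return (ii[y + hgt][x + wid] - ii[y][x + wid]
            - ii[y + hgt][x] + ii[y][x])

def haar_vertical(ii, y, x, hgt, wid):
    """Vertical-edge Haar-like feature: left half minus right half,
    i.e. a local, oriented intensity difference."""
    half = wid // 2
    return (rect_sum(ii, y, x, hgt, half)
            - rect_sum(ii, y, x + half, hgt, half))
```

A detector in this style evaluates many such responses per window and passes the resulting feature vector to the trained SVM.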
Abstract:
To recognize a previously seen object, the visual system must overcome the variability in the object's appearance caused by factors such as illumination and pose. Developments in computer vision suggest that it may be possible to counter the influence of these factors, by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Daily life situations, however, typically require categorization, rather than recognition, of objects. Due to the open-ended character both of natural kinds and of artificial categories, categorization cannot rely on interpolation between stored examples. Nonetheless, knowledge of several representative members, or prototypes, of each of the categories of interest can still provide the necessary computational substrate for the categorization of new instances. The resulting representational scheme based on similarities to prototypes appears to be computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies.
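The "similarities to prototypes" scheme the abstract ends on has a very small computational core: store one representative vector per category and assign a new instance to the category of its most similar prototype. The sketch below is an illustrative minimal version (the dict representation, Euclidean similarity, and function name are assumptions).

```python
def categorize(instance, prototypes):
    """Assign `instance` to the category whose stored prototype it most
    resembles (smallest Euclidean distance), rather than interpolating
    between stored views of a single known object."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(prototypes, key=lambda label: dist(instance, prototypes[label]))
```

Because only a few representative members per category are needed, the scheme remains open-ended: adding a category is just adding a prototype.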
Abstract:
Numerous psychophysical experiments have shown an important role for attentional modulations in vision. Behaviorally, allocation of attention can improve performance in object detection and recognition tasks. At the neural level, attention increases firing rates of neurons in visual cortex whose preferred stimulus is currently attended to. However, it is not yet known how these two phenomena are linked, i.e., how the visual system could be "tuned" in a task-dependent fashion to improve task performance. To answer this question, we performed simulations with the HMAX model of object recognition in cortex [45]. We modulated firing rates of model neurons in accordance with experimental results about effects of feature-based attention on single neurons and measured changes in the model's performance in a variety of object recognition tasks. It turned out that recognition performance could only be improved under very limited circumstances and that attentional influences on the process of object recognition per se tend to display a lack of specificity or raise false alarm rates. These observations lead us to postulate a new role for the observed attention-related neural response modulations.
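The experimental manipulation described, boosting the firing rates of model neurons tuned to the attended feature, amounts to a multiplicative gain applied to one feature channel. The sketch below is only a schematic of that modulation step (the gain value, dict representation, and function name are assumptions; it is not the HMAX model itself).

```python
def apply_feature_attention(responses, attended, gain=1.5):
    """Multiplicatively boost the response of model neurons tuned to the
    attended feature, mirroring feature-based attention effects on
    single-neuron firing rates; other channels are left unchanged."""
    return {feat: r * (gain if feat == attended else 1.0)
            for feat, r in responses.items()}
```

In the paper's simulations, responses modulated in this way are propagated through the recognition hierarchy and task performance is re-measured, which is how the limited specificity of the effect was observed.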
Abstract:
This paper deals with the problem of navigation for an unmanned underwater vehicle (UUV) through image mosaicking. It represents a first step towards a real-time vision-based navigation system for a small-class, low-cost UUV. We propose a navigation system composed of: (i) an image mosaicking module which provides velocity estimates; and (ii) an extended Kalman filter based on the hydrodynamic equation of motion, previously identified for this particular UUV. The resulting system is able to estimate the position and velocity of the robot. Moreover, it is able to deal with the visual occlusions that usually appear when the sea bottom does not have enough visual features to solve the correspondence problem in a certain part of the trajectory.
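The fusion step described, mosaicking supplies velocity measurements and a Kalman filter maintains position and velocity estimates, can be illustrated with a 1-D scalar filter. This is a heavily simplified sketch (the noise parameters, time step, and function shape are assumptions; the paper's filter is an EKF built on the vehicle's identified hydrodynamic model, not this toy).

```python
def kf_step(pos, vel, p, v_meas, dt=0.1, q=0.01, r=0.05):
    """One predict/update cycle of a 1-D filter that fuses a velocity
    measurement from the mosaicking module.
    pos, vel : current position/velocity estimates
    p        : variance of the velocity estimate
    v_meas   : velocity measured by image mosaicking"""
    # predict: dead-reckon position with the current velocity estimate
    pos += vel * dt
    p += q                      # process noise grows the uncertainty
    # update: blend in the measured velocity
    k = p / (p + r)             # Kalman gain
    vel += k * (v_meas - vel)
    p *= (1 - k)
    return pos, vel, p
```

When occlusions suppress the velocity measurement, the same filter simply keeps predicting from the motion model, which is how such a system rides out featureless stretches of sea bottom.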
Abstract:
This paper focuses on the problem of realizing a plane-to-plane virtual link between a camera attached to the end-effector of a robot and a planar object. In order to make the system independent of the object's surface appearance, a structured light emitter is linked to the camera so that 4 laser pointers are projected onto the object. In a previous paper we showed that such a system has good performance and nice characteristics, like partial decoupling near the desired state and robustness against misalignment of the emitter and the camera (J. Pages et al., 2004). However, no analytical results concerning the global asymptotic stability of the system were obtained due to the high complexity of the visual features utilized. In this work we present a better set of visual features which improves the properties of the features in (J. Pages et al., 2004) and for which it is possible to prove global asymptotic stability.
Abstract:
In a search for new sensor systems and new methods for underwater vehicle positioning based on visual observation, this paper presents a computer vision system based on coded light projection. 3D information is extracted from an underwater scene and used to test obstacle avoidance behaviour. In addition, the main ideas for achieving stabilisation of the vehicle in front of an object are presented.
Abstract:
This lab follows the lecture 'System Design: Using UML Use Cases' (http://www.edshare.soton.ac.uk/9619/). It introduces Visual Paradigm as a UML modelling tool. Students work through the Visual Paradigm online tutorials and then create two projects.
Abstract:
This lab follows the 'System Design' lectures (http://www.edshare.soton.ac.uk/9653/ and http://www.edshare.soton.ac.uk/6280/). Students use Visual Paradigm for UML to build Activity and Sequence models through project examples: a Library, a Plant Nursery and a Health Spa.
Abstract:
This lab follows the 'System Design' lectures (http://www.edshare.soton.ac.uk/6280/, http://www.edshare.soton.ac.uk/9653/ and http://www.edshare.soton.ac.uk/9713/). Students use Visual Paradigm for UML to build Class models through project examples: an Aircraft Manufacturing Company, a Library and a Plant Nursery.
Abstract:
This thesis proposes a solution to the problem of estimating the motion of an Unmanned Underwater Vehicle (UUV). Our approach is based on the integration of the incremental measurements provided by a vision system. When the vehicle is close to the underwater terrain, it constructs a visual map (a so-called "mosaic") of the area where the mission takes place while, at the same time, localizing itself on this map, following the Concurrent Mapping and Localization strategy. The proposed methodology to achieve this goal is based on a feature-based mosaicking algorithm. A down-looking camera is attached to the underwater vehicle. As the vehicle moves, a sequence of images of the sea floor is acquired by the camera. For every image of the sequence, a set of characteristic features is detected by means of a corner detector. Then, their correspondences are found in the next image of the sequence. Solving the correspondence problem in an accurate and reliable way is a difficult task in computer vision. We consider different alternatives to solve this problem by introducing a detailed analysis of the textural characteristics of the image. This is done in two phases: first comparing different texture operators individually, and then selecting those that best characterize the point/matching pair and using them together to obtain a more robust characterization. Various alternatives are also studied to merge the information provided by the individual texture operators. Finally, the best approach in terms of robustness and efficiency is proposed. After the correspondences have been solved, for every pair of consecutive images we obtain a list of image features in the first image and their matchings in the next frame. Our aim is then to recover the apparent motion of the camera from these features. Although careful texture analysis is devoted to the matching procedure, some false matches (known as outliers) could still appear among the right correspondences.
For this reason, a robust estimation technique is used to estimate the planar transformation (homography) which explains the dominant motion of the image. Next, this homography is used to warp the processed image to the common mosaic frame, constructing a composite image formed by every frame of the sequence. With the aim of estimating the position of the vehicle as the mosaic is being constructed, the 3D motion of the vehicle can be computed from the measurements obtained by a sonar altimeter and the incremental motion computed from the homography. Unfortunately, as the mosaic increases in size, image local alignment errors increase the inaccuracies associated with the position of the vehicle. Occasionally, the trajectory described by the vehicle may cross over itself. In this situation new information is available, and the system can readjust the position estimates. Our proposal consists not only of localizing the vehicle, but also of readjusting the trajectory described by the vehicle when crossover information is obtained. This is achieved by implementing an Augmented State Kalman Filter (ASKF). Kalman filtering appears to be an adequate framework for dealing with position estimates and their associated covariances. Finally, some experimental results are shown. A laboratory setup has been used to analyze and evaluate the accuracy of the mosaicking system. This setup enables a quantitative measurement of the accumulated errors of the mosaics created in the lab. Then, the results obtained from real sea trials using the URIS underwater vehicle are shown.
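The robust estimation step, fitting a dominant-motion model while rejecting outlier matches, follows the classic RANSAC pattern: sample a minimal set of matches, hypothesize a motion, count the matches it explains, and refine on the largest consensus set. The sketch below is a dependency-free illustration that uses a pure-translation model in place of the full 8-parameter homography (that substitution, plus all names and thresholds, are assumptions made to keep the example self-contained).

```python
import random

def ransac_translation(matches, iters=200, tol=2.0, seed=0):
    """Robustly estimate the dominant image motion from point matches,
    RANSAC-style, with a translation-only model for simplicity.
    matches: list of ((x1, y1), (x2, y2)) feature correspondences."""
    rng = random.Random(seed)
    best_t, best_inliers = (0.0, 0.0), []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)   # minimal sample: 1 match
        tx, ty = x2 - x1, y2 - y1                  # hypothesized motion
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - tx) < tol
                   and abs(m[1][1] - m[0][1] - ty) < tol]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    # refine on the consensus set: false matches have been rejected
    n = len(best_inliers)
    tx = sum(m[1][0] - m[0][0] for m in best_inliers) / n
    ty = sum(m[1][1] - m[0][1] for m in best_inliers) / n
    return (tx, ty), best_inliers
```

A real homography needs 4 point correspondences per sample and a linear solve, but the control flow (sample, score, keep best, refine) is exactly the one sketched here.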