885 results for image analysis, gesture recognition, body recognition, computer vision, multimedia systems


Relevance:

100.00%

Publisher:

Abstract:

In this paper we report the degree of reliability of image sequences taken by off-the-shelf TV cameras for modeling camera rotation and reconstructing 3D structure using computer vision techniques. This is done in spite of the fact that such cameras are designed for human viewing rather than for computer vision. Our scenario consists of a static scene and a mobile camera moving through it. The scene is any long axial building dominated by features along the three principal orientations, with at least one wall containing prominent repetitive planar features such as doors, windows, bricks, etc. The camera is an ordinary commercial camcorder moving along the long axis of the scene and is allowed to rotate freely within +/- 10 degrees in all directions. This makes it possible for the camera to be held by a walking non-professional cameraman with a normal gait, or to be mounted on a mobile robot. The system has been tested successfully on sequences of images of a variety of structured but fairly cluttered scenes taken by different walking cameramen. Potential application areas of the system include medicine, robotics and photogrammetry.

An overview is given of a vision system for locating, recognising and tracking multiple vehicles, using an image sequence taken by a single camera mounted on a moving vehicle. The camera motion is estimated by matching features on the ground plane from one image to the next. Vehicle detection and hypothesis generation are performed using template correlation, and a 3D wire-frame model of the vehicle is fitted to the image. Once detected and identified, vehicles are tracked using dynamic filtering. A separate batch-mode filter obtains the 3D trajectories of nearby vehicles over an extended time. Results are shown for a motorway image sequence.
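
The "dynamic filtering" used for tracking can be illustrated with a minimal constant-velocity Kalman filter in one image coordinate. This is a sketch, not the system's actual filter, and the noise parameters `q` and `r` are illustrative assumptions.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.5):
    """Filter noisy 1D position measurements with a constant-velocity model."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [pos, vel]
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance (assumed)
    R = np.array([[r]])                     # measurement noise covariance (assumed)
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)                           # initial state covariance
    filtered = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        filtered.append(float(x[0, 0]))
    return filtered
```

Fed a sequence of vehicle positions, the filter smooths measurement noise while its velocity state supplies the prediction for the next frame.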

1. Nicotine has been implicated as a causative factor in the intrauterine growth retardation associated with smoking in pregnancy. A study was set up to ascertain the effect of nicotine on fetal growth and whether this could be related to the actions of this drug on maternal adipose tissue metabolism. 2. Sprague-Dawley rats were mated and assigned to control and nicotine groups, the latter receiving nicotine in the drinking-water throughout pregnancy. Animals were weighed at regular intervals and killed on day 20 of pregnancy. Rates of maternal adipose tissue lipolysis and lipogenesis were measured. Fetal and placental weights were recorded, and analyses of fetal body water, fat, protein and DNA were carried out. 3. Weight gains of mothers in the nicotine group were lower in the 1st and 2nd weeks of pregnancy, but similar to controls in the 3rd week. Fetal body-weights, DNA, protein and percentage water contents were similar in both groups. Mean fetal body fat (g/kg) was significantly higher in the nicotine group (96.2 (SE 5.1)) than in controls (72.0 (SE 2.9)). Rates of maternal lipolysis were also higher in the nicotine group. 4. The causes of these differences and their effects on maternal and fetal well-being are discussed.

The objective of this study was to determine the potential of mid-infrared spectroscopy coupled with multidimensional statistical analysis for predicting the instrumental texture and meltability attributes of processed cheese. Processed cheeses (n = 32) of varying composition were manufactured in a pilot plant. Following two and four weeks' storage at 4 degrees C, samples were analysed using texture profile analysis, two meltability tests (computer vision; Olson and Price) and mid-infrared spectroscopy (4000–640 cm⁻¹). Partial least squares regression was used to develop predictive models for all measured attributes. Five attributes were successfully modelled with varying degrees of accuracy. The computer vision meltability model allowed discrimination between high and low melt values (R² = 0.64). The hardness and springiness models gave approximate quantitative results (R² = 0.77), while the cohesiveness (R² = 0.81) and Olson and Price meltability (R² = 0.88) models gave good predictions. (c) 2006 Elsevier Ltd. All rights reserved.
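
Partial least squares regression for a single response can be sketched with the classic PLS1 (NIPALS) recursion. The synthetic data below are a stand-in for the cheese spectra, so the fit is illustrative only.

```python
import numpy as np

def pls1_fit_predict(X, y, n_components):
    """Fit PLS1 on (X, y) by NIPALS and return in-sample predictions."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    Xr, yr = Xc.copy(), yc.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)           # weight vector
        t = Xr @ w                       # score vector
        tt = t @ t
        p = Xr.T @ t / tt                # X loading
        qk = yr @ t / tt                 # y loading
        Xr -= np.outer(t, p)             # deflate X and y
        yr -= qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # regression coefficients
    return Xc @ B + y.mean()
```

With as many components as (noise-free) predictors, PLS1 reproduces the least-squares fit; in practice one would choose the component count by cross-validation, as in studies of this kind.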

The current state of the art and direction of research in computer vision aimed at automating the analysis of CCTV images is presented. This includes low-level identification of objects within the field of view of cameras, following those objects over time and between cameras, and the interpretation of those objects' appearance and movements with respect to models of behaviour (and hence the inference of intentions). The potential ethical problems (and some potential opportunities) such developments may pose if and when deployed in the real world are presented, and suggestions are made as to the new regulations that will be needed if such systems are not to further enhance the power of the surveillers against the surveilled.

This paper presents a neuroscience inspired information theoretic approach to motion segmentation. Robust motion segmentation represents a fundamental first stage in many surveillance tasks. As an alternative to widely adopted individual segmentation approaches, which are challenged in different ways by imagery exhibiting a wide range of environmental variation and irrelevant motion, this paper presents a new biologically-inspired approach which computes the multivariate mutual information between multiple complementary motion segmentation outputs. Performance evaluation across a range of datasets and against competing segmentation methods demonstrates robust performance.
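
The core quantity, mutual information between segmentation outputs, can be computed from a joint histogram of labels. The paper combines more than two outputs via multivariate mutual information; this pairwise sketch shows the basic construction.

```python
import math

def mutual_information(mask_a, mask_b):
    """MI (in bits) between two equal-length binary label sequences."""
    n = len(mask_a)
    joint = {}
    for a, b in zip(mask_a, mask_b):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    # marginal distributions of each mask
    pa = {v: sum(c for (a, _), c in joint.items() if a == v) / n for v in (0, 1)}
    pb = {v: sum(c for (_, b), c in joint.items() if b == v) / n for v in (0, 1)}
    mi = 0.0
    for (a, b), c in joint.items():
        pab = c / n
        mi += pab * math.log2(pab / (pa[a] * pb[b]))
    return mi
```

Two identical foreground/background masks share maximal information (one bit for a balanced mask), while independent masks score near zero, which is what makes the measure useful for weighing complementary segmenters.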

For many tasks, such as retrieving a previously viewed object, an observer must form a representation of the world at one location and use it at another. A world-based 3D reconstruction of the scene built up from visual information would fulfil this requirement, something computer vision now achieves with great speed and accuracy. However, I argue that it is neither easy nor necessary for the brain to do this. I discuss biologically plausible alternatives, including the possibility of avoiding 3D coordinate frames such as ego-centric and world-based representations. For example, the distance, slant and local shape of surfaces dictate the propensity of visual features to move in the image with respect to one another as the observer’s perspective changes (through movement or binocular viewing). Such propensities can be stored without the need for 3D reference frames. The problem of representing a stable scene in the face of continual head and eye movements is an appropriate starting place for understanding the goal of 3D vision, more so, I argue, than the case of a static binocular observer.

Many real problems involve the classification of data into categories or classes. Given a data set containing data whose classes are known, Machine Learning algorithms can be employed to induce a classifier able to predict the class of new data from the same domain, performing the desired discrimination. Some learning techniques are originally conceived for the solution of problems with only two classes, also known as binary classification problems. However, many problems require the discrimination of examples into more than two categories or classes. This paper presents a survey of the main strategies for generalizing binary classifiers to problems with more than two classes, known as multiclass classification problems. The focus is on strategies that decompose the original multiclass problem into multiple binary subtasks, whose outputs are combined to obtain the final prediction.
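
The decompose-and-combine scheme can be sketched as a one-vs-rest decomposition: one binary subtask per class, combined by argmax over the per-class scores. The base scorer here is a trivial nearest-centroid rule chosen for brevity, not one of the learners from the survey.

```python
import numpy as np

class OneVsRest:
    """One binary subtask per class; predictions combined by argmax."""

    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        # each binary "classifier" scores membership as the negative
        # distance to the centroid of that class's positive examples
        self.centroids_ = {c: X[np.array(y) == c].mean(axis=0)
                           for c in self.classes_}
        return self

    def predict(self, X):
        scores = np.stack([-np.linalg.norm(X - self.centroids_[c], axis=1)
                           for c in self.classes_], axis=1)
        return [self.classes_[i] for i in scores.argmax(axis=1)]
```

Any binary learner that outputs a confidence score can be dropped in place of the centroid rule; one-vs-one decomposition differs only in training a subtask per pair of classes and combining by voting.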

Case-Based Reasoning is a methodology for problem solving based on past experiences: it tries to solve a new problem by retrieving and adapting previously known solutions of similar problems. Retrieved solutions, however, generally require adaptation before they can be applied in new contexts, and the development of an efficient methodology for case adaptation is one of the major challenges in Case-Based Reasoning. The most widely used form of adaptation employs hand-coded adaptation rules, which demands a significant knowledge acquisition and engineering effort. An alternative that avoids the difficulties of acquiring adaptation knowledge by hand is the use of hybrid approaches and automatic learning algorithms. We investigate hybrid approaches for case adaptation that employ Machine Learning algorithms to learn adaptation knowledge automatically from a case base and apply it to adapt retrieved solutions. To verify the potential of the proposed approaches, they are experimentally compared with individual Machine Learning techniques. The results indicate that these approaches are an efficient means of acquiring case adaptation knowledge, and show that combining the Instance-Based Learning and Inductive Learning paradigms with a data set of adaptation patterns yields adaptations of the retrieved solutions with high predictive accuracy.
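
A retrieve-then-adapt cycle with learned adaptation patterns can be sketched as follows, assuming numeric case features and solutions. The "adaptation patterns" here are simply (feature difference → solution difference) pairs mined from the case base itself; this echoes the instance-based idea described above but is not the paper's actual algorithm, and all names are illustrative.

```python
import numpy as np

def build_patterns(cases, solutions):
    """Every ordered pair of cases yields one adaptation pattern."""
    pats = []
    n = len(cases)
    for i in range(n):
        for j in range(n):
            if i != j:
                pats.append((cases[j] - cases[i], solutions[j] - solutions[i]))
    return pats

def solve(query, cases, solutions, patterns):
    # 1. retrieve: the case most similar to the query
    d = np.linalg.norm(cases - query, axis=1)
    k = int(d.argmin())
    # 2. adapt: apply the pattern whose feature difference best matches
    #    the gap between the query and the retrieved case
    diff = query - cases[k]
    pd = np.array([np.linalg.norm(f - diff) for f, _ in patterns])
    return solutions[k] + patterns[int(pd.argmin())][1]
```

The point of the sketch is the division of labour: retrieval supplies a nearby solution, and the learned patterns supply the correction, replacing hand-coded adaptation rules.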

Cortical bones, essential for mechanical support and structure in many animals, involve a large number of canals organized in an intricate fashion. Using state-of-the-art image analysis and computer graphics, a 3D reconstruction of a whole bone (phalange) of a young chicken was obtained and represented as a complex network in which each canal was associated with an edge and every confluence of three or more canals yielded a node. Representing the bone canal structure as a complex network allowed several methods to be applied to characterize and analyze the organization of the canal system and its robustness. First, the distribution of node degrees (i.e. the number of canals connected to each node) confirmed previous indications that bone canal networks follow a power law, and therefore contain some highly connected nodes (hubs). The bone network was also found to be partitioned into communities or modules, i.e. groups of nodes that are more intensely connected to one another than to the rest of the network. We verified that each community exhibited distinct topological properties that are possibly linked to its specific function. To better understand the organization of the bone network, its resilience to two types of failures (random attack and cascaded failures) was also quantified in comparison with randomized and regular counterparts. The results indicate that the modular structure improves the robustness of the bone network compared with a regular network with the same average degree and number of nodes. The effects of disease processes (e.g., osteoporosis) and mutations in genes (e.g., BMP4) that occur at the molecular level can now be investigated at the mesoscopic level using network-based approaches.

The count intercept is a robust method for the numerical analysis of fabrics (Launeau and Robin, 1996). It counts the number of intersections between a set of parallel scan lines and a mineral phase, which must be identified on a digital image. However, the method is only sensitive to boundaries and therefore supposes the user has some knowledge about their significance. The aim of this paper is to show that a proper grey-level detection of boundaries along scan lines is sufficient to calculate the two-dimensional anisotropy of grain or crystal distributions without any particular image processing. Populations of grains and crystals usually display elliptical anisotropies in rocks. When confirmed by the intercept analysis, a combination of at least 3 mean-length intercept roses, taken on 3 more or less perpendicular sections, allows the calculation of 3-dimensional ellipsoids and the determination of their standard deviation in direction and intensity in 3 dimensions as well. The feasibility of this quick method is demonstrated by numerous examples on theoretical objects deformed by active and passive deformation, on BSE images of synthetic magma flow, on drawings or direct analysis of thin-section pictures of sandstones, and on digital images of granites taken and measured directly in the field. (C) 2010 Elsevier B.V. All rights reserved.
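
The intercept idea can be sketched on a binary phase image: count the grey-level boundary crossings along each scan line, and divide the phase area by the number of runs to get the mean intercept length per direction. Only horizontal and vertical scan lines are used here for brevity (the full method sweeps a rose of directions), and the phase is assumed not to touch the image border.

```python
import numpy as np

def mean_intercept_length(img, axis):
    """Mean run length of phase pixels along scan lines of the given axis."""
    lines = img if axis == 0 else img.T
    crossings, phase_pixels = 0, 0
    for line in lines:
        # a boundary is any 0->1 or 1->0 transition along the scan line
        crossings += int(np.count_nonzero(np.diff(line)))
        phase_pixels += int(line.sum())
    # each interior phase run contributes two crossings
    runs = max(crossings / 2.0, 1e-9)
    return phase_pixels / runs
```

An elongated grain yields a longer mean intercept along its long axis than across it; the ratio between directions is the 2D anisotropy the paper extracts.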

We present parallel algorithms on the BSP/CGM model, with p processors, to count and generate all the maximal cliques of a circle graph with n vertices and m edges. To count the number of all the maximal cliques, without actually generating them, our algorithm requires O(log p) communication rounds with O(nm/p) local computation time. We also present an algorithm to generate the first maximal clique in O(log p) communication rounds with O(nm/p) local computation, and to generate each of the subsequent maximal cliques, this algorithm requires O(log p) communication rounds with O(m/p) local computation. The maximal cliques generation algorithm is based on generating all maximal paths in a directed acyclic graph, and we present an algorithm for this problem that uses O(log p) communication rounds with O(m/p) local computation for each maximal path. We also show that the presented algorithms can be extended to the CREW PRAM model.
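
The underlying subproblem, generating all maximal paths of a DAG, can be sketched sequentially: a maximal path runs from a source (no incoming edges) to a sink (no outgoing edges), so it cannot be extended at either end. The parallel algorithm distributes this enumeration; the sketch below only shows what is being enumerated.

```python
def maximal_paths(nodes, edges):
    """Enumerate all source-to-sink (maximal) paths of a DAG."""
    succ = {n: [] for n in nodes}
    pred_count = {n: 0 for n in nodes}
    for a, b in edges:
        succ[a].append(b)
        pred_count[b] += 1
    sources = [n for n in nodes if pred_count[n] == 0]
    out = []

    def extend(path):
        nxt = succ[path[-1]]
        if not nxt:                  # sink reached: path is maximal
            out.append(list(path))
        for v in nxt:
            path.append(v)
            extend(path)
            path.pop()

    for s in sources:
        extend([s])
    return out
```

Note the number of maximal paths can be exponential in the graph size, which is why the per-path cost bounds (O(m/p) local computation per path) are the natural way to state the parallel complexity.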

Modern medical imaging techniques enable the acquisition of in vivo high-resolution images of the vascular system. Most common methods for detecting vessels in these images, such as multiscale Hessian-based operators and matched filters, rely on the assumption that at each voxel there is a single cylinder. Such an assumption is clearly violated at the multitude of branching points that are easily observed in all but the most focused vascular image studies. In this paper, we propose a novel method for detecting vessels in medical images that relaxes this single-cylinder assumption. We directly exploit local neighborhood intensities and extract characteristics of the local intensity profile (in a spherical polar coordinate system), which we term the polar neighborhood intensity profile. We present a new method to capture the common properties shared by the polar neighborhood intensity profiles of all types of vascular points in the vascular system. The new method enables us to detect vessels even near complex extreme points, including branching points. Our method demonstrates improved performance over standard methods on both 2D synthetic images and 3D animal and clinical vascular images, particularly close to vessel branching regions. (C) 2008 Elsevier B.V. All rights reserved.
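
A 2D analogue of the polar neighborhood intensity profile can be sketched by sampling the image on a circle around a candidate pixel, one value per angle. The paper works in 3D spherical coordinates; this nearest-neighbour 2D version only shows the construction.

```python
import math

def polar_profile(img, cy, cx, r, n_angles=8):
    """Nearest-neighbour intensity samples on a circle around (cy, cx)."""
    prof = []
    for k in range(n_angles):
        th = 2 * math.pi * k / n_angles
        y = int(round(cy + r * math.sin(th)))   # row coordinate
        x = int(round(cx + r * math.cos(th)))   # column coordinate
        prof.append(img[y][x])
    return prof
```

On a straight vessel the profile shows two bright lobes (along the vessel axis); at a branching point more lobes appear, which is the kind of structure the single-cylinder models cannot represent.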

A low-cost method is proposed to classify wine and whisky samples using a disposable voltammetric electronic tongue fabricated from gold and copper substrates, combined with a pattern recognition technique (principal component analysis). The proposed device was successfully used to discriminate between expensive and cheap whisky samples and to detect adulteration using only a copper electrode. For wines, the electronic tongue comprised copper and gold working electrodes and was able to classify three different brands of wine and to distinguish wine types, i.e., dry red, soft red, dry white and soft white. Crown Copyright (C) 2011 Published by Elsevier B.V. All rights reserved.
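
The pattern-recognition step can be sketched as a minimal principal component analysis: project each sample's voltammetric feature vector onto the leading eigenvectors of the covariance matrix and inspect the resulting score plot for clusters. The data below are synthetic stand-ins for the electronic-tongue measurements.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project the rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)                 # mean-centre the features
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return Xc @ top
```

For classification tasks like the whisky/wine discrimination above, samples of the same brand or type should group together in the first two score dimensions.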

Background: Several methods have been used to prevent pericardial and retrosternal adhesions, but none of them evaluated the mesothelial regenerative hypothesis. There is evidence that mesothelial trauma reduces pericardial fibrinolytic capability and induces an adhesion process. Keratinocyte growth factor (KGF) has been shown to improve mesothelial cell proliferation. This study investigated the influence of keratinocyte growth factor in reducing post-surgical adhesions. Methods: Twelve pigs were operated on and an adhesion protocol was employed. Following a stratified randomization, the animals received a topical application of KGF or saline. At 8 weeks, intrapericardial adhesions were evaluated and a severity score was established. The time spent dissecting the adhesions and the amount of sharp dissection used were recorded. Histological sections were stained with sirius red and morphometric analyses were performed with a computer-assisted image analysis system. Results: The severity score was lower in the KGF group than in the control group (11.5 vs 17, p = 0.005). The dissection time was lower in the KGF group (9.2 +/- 1.4 min vs 33.9 +/- 9.2 min, p = 0.004) and correlated significantly with the severity score (r = 0.83, p = 0.001). Significantly less sharp dissection was also required in the KGF group, and adhesion area and adhesion collagen were significantly lower in the KGF group than in the control group. Conclusion: The stimulation of pericardial cells with KGF reduced the intensity of postoperative adhesions and facilitated re-operation. This study suggests that mesothelial regeneration is a new horizon in anti-adhesion therapies. (C) 2008 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.